Test Report: Hyper-V_Windows 18063

9a5d81419c51a6c3c4fef58cf8d1de8416716248:2024-02-29:33343

Failed tests (54/207)

Order  Failed test  Duration (s)
38 TestAddons/parallel/Registry 81.65
64 TestErrorSpam/setup 181.31
73 TestFunctional/serial/StartWithProxy 211.35
75 TestFunctional/serial/SoftStart 180.84
76 TestFunctional/serial/KubeContext 11.21
77 TestFunctional/serial/KubectlGetPods 11.36
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 8.83
85 TestFunctional/serial/CacheCmd/cache/cache_reload 179.9
87 TestFunctional/serial/MinikubeKubectlCmd 11.56
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 11.16
89 TestFunctional/serial/ExtraConfig 148.62
90 TestFunctional/serial/ComponentHealth 11.32
93 TestFunctional/serial/InvalidService 0.13
95 TestFunctional/parallel/ConfigCmd 1.73
99 TestFunctional/parallel/StatusCmd 51.91
103 TestFunctional/parallel/ServiceCmdConnect 13.52
105 TestFunctional/parallel/PersistentVolumeClaim 13.32
109 TestFunctional/parallel/MySQL 11.61
111 TestFunctional/parallel/CertSync 248.6
115 TestFunctional/parallel/NodeLabels 11.87
120 TestFunctional/parallel/ServiceCmd/DeployApp 0.13
121 TestFunctional/parallel/ServiceCmd/List 9.16
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 9.14
124 TestFunctional/parallel/ServiceCmd/JSONOutput 8.03
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
133 TestFunctional/parallel/ServiceCmd/HTTPS 7.78
134 TestFunctional/parallel/ServiceCmd/Format 7.82
135 TestFunctional/parallel/ServiceCmd/URL 7.85
141 TestFunctional/parallel/DockerEnv/powershell 477.21
142 TestFunctional/parallel/UpdateContextCmd/no_changes 2.35
145 TestFunctional/parallel/ImageCommands/ImageListShort 60.02
146 TestFunctional/parallel/ImageCommands/ImageListTable 60.13
147 TestFunctional/parallel/ImageCommands/ImageListJson 59.97
148 TestFunctional/parallel/ImageCommands/ImageListYaml 60.04
149 TestFunctional/parallel/ImageCommands/ImageBuild 120.52
151 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 102.3
152 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 120.46
153 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 120.51
154 TestFunctional/parallel/ImageCommands/ImageSaveToFile 60.33
156 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.36
171 TestIngressAddonLegacy/StartLegacyK8sCluster 402.47
173 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 131.76
174 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 77.19
222 TestMultiNode/serial/PingHostFrom2Pods 54.12
223 TestMultiNode/serial/AddNode 232.89
226 TestMultiNode/serial/CopyFile 65.18
229 TestMultiNode/serial/RestartKeepsNodes 508.81
232 TestMultiNode/serial/RestartMultiNode 190.53
236 TestPreload 274.56
242 TestRunningBinaryUpgrade 929.84
244 TestKubernetesUpgrade 787.67
259 TestNoKubernetes/serial/StartWithK8s 307.04
275 TestPause/serial/SecondStartNoReconfiguration 567.99
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10800.473
TestAddons/parallel/Registry (81.65s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 25.5894ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-l9gtr" [5391c2fe-0842-4421-92e9-0b4baaf14e39] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0151178s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vmxwr" [b536aafa-559a-4e32-abe5-73b1e463916f] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0134123s
addons_test.go:340: (dbg) Run:  kubectl --context addons-611800 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-611800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-611800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (16.2032933s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-611800 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-611800 ip: (2.3283388s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0229 00:49:56.018516   10608 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-611800 ip"
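A note on the stderr warning above: the Docker CLI keys each context's metadata directory under `.docker\contexts\meta\` by the SHA-256 hex digest of the context name, which is why the missing path for the "default" context contains `37a8eec1ce19…`. A minimal sketch reproducing that path component (the helper name `context_meta_dir` is illustrative, not a Docker API):

```python
import hashlib

# The Docker CLI stores context metadata at
#   <docker config dir>/contexts/meta/<sha256(context name)>/meta.json
# This helper computes the directory-name component.
def context_meta_dir(name: str) -> str:
    return hashlib.sha256(name.encode("utf-8")).hexdigest()

digest = context_meta_dir("default")
print(digest)  # matches the 37a8eec1... directory in the warning above
```

The warning is therefore cosmetic on a machine that has never created a Docker context: the CLI falls back to the default endpoint, but still logs the failed lookup to stderr, which trips the test's expectation of empty stderr.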
2024/02/29 00:49:58 [DEBUG] GET http://172.19.6.238:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-611800 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-611800 addons disable registry --alsologtostderr -v=1: (16.0006082s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-611800 -n addons-611800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-611800 -n addons-611800: (12.3916606s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-611800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-611800 logs -n 25: (9.1696959s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-695400 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:42 UTC |                     |
	|         | -p download-only-695400                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:42 UTC | 29 Feb 24 00:42 UTC |
	| delete  | -p download-only-695400                                                                     | download-only-695400 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:42 UTC | 29 Feb 24 00:42 UTC |
	| start   | -o=json --download-only                                                                     | download-only-923600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:42 UTC |                     |
	|         | -p download-only-923600                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:42 UTC | 29 Feb 24 00:42 UTC |
	| delete  | -p download-only-923600                                                                     | download-only-923600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:42 UTC | 29 Feb 24 00:42 UTC |
	| start   | -o=json --download-only                                                                     | download-only-189600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:42 UTC |                     |
	|         | -p download-only-189600                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:43 UTC | 29 Feb 24 00:43 UTC |
	| delete  | -p download-only-189600                                                                     | download-only-189600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:43 UTC | 29 Feb 24 00:43 UTC |
	| delete  | -p download-only-695400                                                                     | download-only-695400 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:43 UTC | 29 Feb 24 00:43 UTC |
	| delete  | -p download-only-923600                                                                     | download-only-923600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:43 UTC | 29 Feb 24 00:43 UTC |
	| delete  | -p download-only-189600                                                                     | download-only-189600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:43 UTC | 29 Feb 24 00:43 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-085100 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:43 UTC |                     |
	|         | binary-mirror-085100                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:63880                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-085100                                                                     | binary-mirror-085100 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:43 UTC | 29 Feb 24 00:43 UTC |
	| addons  | enable dashboard -p                                                                         | addons-611800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:43 UTC |                     |
	|         | addons-611800                                                                               |                      |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-611800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:43 UTC |                     |
	|         | addons-611800                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-611800 --wait=true                                                                | addons-611800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:43 UTC | 29 Feb 24 00:49 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| ssh     | addons-611800 ssh cat                                                                       | addons-611800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:49 UTC | 29 Feb 24 00:49 UTC |
	|         | /opt/local-path-provisioner/pvc-2ac34051-4600-43ad-afd5-2be80059d3d9_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| addons  | addons-611800 addons disable                                                                | addons-611800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:49 UTC | 29 Feb 24 00:49 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ip      | addons-611800 ip                                                                            | addons-611800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:49 UTC | 29 Feb 24 00:49 UTC |
	| addons  | addons-611800 addons disable                                                                | addons-611800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:49 UTC | 29 Feb 24 00:50 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-611800 addons disable                                                                | addons-611800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:49 UTC | 29 Feb 24 00:50 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-611800 addons                                                                        | addons-611800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:50 UTC | 29 Feb 24 00:50 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-611800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:50 UTC |                     |
	|         | addons-611800                                                                               |                      |                   |         |                     |                     |
	| addons  | addons-611800 addons                                                                        | addons-611800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:50 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 00:43:16
	Running on machine: minikube5
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 00:43:16.079571    3188 out.go:291] Setting OutFile to fd 840 ...
	I0229 00:43:16.080665    3188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 00:43:16.080665    3188 out.go:304] Setting ErrFile to fd 872...
	I0229 00:43:16.080665    3188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 00:43:16.100499    3188 out.go:298] Setting JSON to false
	I0229 00:43:16.104057    3188 start.go:129] hostinfo: {"hostname":"minikube5","uptime":263623,"bootTime":1708903772,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 00:43:16.104057    3188 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 00:43:16.105286    3188 out.go:177] * [addons-611800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 00:43:16.105895    3188 notify.go:220] Checking for updates...
	I0229 00:43:16.106606    3188 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 00:43:16.107549    3188 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 00:43:16.107549    3188 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 00:43:16.108237    3188 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 00:43:16.109222    3188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 00:43:16.110685    3188 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 00:43:21.260062    3188 out.go:177] * Using the hyperv driver based on user configuration
	I0229 00:43:21.260903    3188 start.go:299] selected driver: hyperv
	I0229 00:43:21.260903    3188 start.go:903] validating driver "hyperv" against <nil>
	I0229 00:43:21.260903    3188 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 00:43:21.308525    3188 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 00:43:21.309684    3188 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 00:43:21.309764    3188 cni.go:84] Creating CNI manager for ""
	I0229 00:43:21.309764    3188 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 00:43:21.309764    3188 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 00:43:21.309764    3188 start_flags.go:323] config:
	{Name:addons-611800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-611800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 00:43:21.309764    3188 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 00:43:21.311001    3188 out.go:177] * Starting control plane node addons-611800 in cluster addons-611800
	I0229 00:43:21.311796    3188 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 00:43:21.311796    3188 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 00:43:21.311796    3188 cache.go:56] Caching tarball of preloaded images
	I0229 00:43:21.312414    3188 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 00:43:21.312414    3188 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 00:43:21.313037    3188 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\config.json ...
	I0229 00:43:21.313037    3188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\config.json: {Name:mk8ace41f05a7bae1e88b765c9b7d192c2993235 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:43:21.314509    3188 start.go:365] acquiring machines lock for addons-611800: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 00:43:21.314662    3188 start.go:369] acquired machines lock for "addons-611800" in 105µs
	I0229 00:43:21.314662    3188 start.go:93] Provisioning new machine with config: &{Name:addons-611800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-611800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 00:43:21.314662    3188 start.go:125] createHost starting for "" (driver="hyperv")
	I0229 00:43:21.315525    3188 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0229 00:43:21.315525    3188 start.go:159] libmachine.API.Create for "addons-611800" (driver="hyperv")
	I0229 00:43:21.315525    3188 client.go:168] LocalClient.Create starting
	I0229 00:43:21.316297    3188 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0229 00:43:21.450942    3188 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0229 00:43:21.745301    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0229 00:43:23.774437    3188 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0229 00:43:23.774437    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:43:23.774437    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0229 00:43:25.429009    3188 main.go:141] libmachine: [stdout =====>] : False
	
	I0229 00:43:25.429009    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:43:25.429173    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 00:43:26.826676    3188 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 00:43:26.826676    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:43:26.827472    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 00:43:30.308220    3188 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 00:43:30.308970    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:43:30.310912    3188 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 00:43:30.736922    3188 main.go:141] libmachine: Creating SSH key...
	I0229 00:43:30.855812    3188 main.go:141] libmachine: Creating VM...
	I0229 00:43:30.855812    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 00:43:33.541665    3188 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 00:43:33.541746    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:43:33.541746    3188 main.go:141] libmachine: Using switch "Default Switch"
	I0229 00:43:33.541851    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 00:43:35.207331    3188 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 00:43:35.207331    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:43:35.207331    3188 main.go:141] libmachine: Creating VHD
	I0229 00:43:35.207331    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\fixed.vhd' -SizeBytes 10MB -Fixed
	I0229 00:43:38.832252    3188 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 2FEE4BB6-0331-4795-AB31-77ABDBDAE018
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0229 00:43:38.832252    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:43:38.832252    3188 main.go:141] libmachine: Writing magic tar header
	I0229 00:43:38.832252    3188 main.go:141] libmachine: Writing SSH key tar header
	I0229 00:43:38.841351    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\disk.vhd' -VHDType Dynamic -DeleteSource
	I0229 00:43:41.889750    3188 main.go:141] libmachine: [stdout =====>] : 
	I0229 00:43:41.890144    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:43:41.890144    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\disk.vhd' -SizeBytes 20000MB
	I0229 00:43:44.300261    3188 main.go:141] libmachine: [stdout =====>] : 
	I0229 00:43:44.300261    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:43:44.300509    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-611800 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0229 00:43:47.650026    3188 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-611800 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 00:43:47.650614    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:43:47.650694    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-611800 -DynamicMemoryEnabled $false
	I0229 00:43:49.748699    3188 main.go:141] libmachine: [stdout =====>] : 
	I0229 00:43:49.748957    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:43:49.748957    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-611800 -Count 2
	I0229 00:43:51.822579    3188 main.go:141] libmachine: [stdout =====>] : 
	I0229 00:43:51.822631    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:43:51.822631    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-611800 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\boot2docker.iso'
	I0229 00:43:54.277019    3188 main.go:141] libmachine: [stdout =====>] : 
	I0229 00:43:54.277019    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:43:54.277765    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-611800 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\disk.vhd'
	I0229 00:43:56.773923    3188 main.go:141] libmachine: [stdout =====>] : 
	I0229 00:43:56.773923    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:43:56.773923    3188 main.go:141] libmachine: Starting VM...
	I0229 00:43:56.774297    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-611800
	I0229 00:43:59.523995    3188 main.go:141] libmachine: [stdout =====>] : 
	I0229 00:43:59.524936    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:43:59.524936    3188 main.go:141] libmachine: Waiting for host to start...
	I0229 00:43:59.524936    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:44:01.615361    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:44:01.615361    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:01.615361    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:44:03.964380    3188 main.go:141] libmachine: [stdout =====>] : 
	I0229 00:44:03.964380    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:04.979453    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:44:07.080906    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:44:07.081862    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:07.081980    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:44:09.433106    3188 main.go:141] libmachine: [stdout =====>] : 
	I0229 00:44:09.433106    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:10.439241    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:44:12.528501    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:44:12.528501    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:12.528501    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:44:14.860098    3188 main.go:141] libmachine: [stdout =====>] : 
	I0229 00:44:14.860098    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:15.870979    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:44:17.925087    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:44:17.925223    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:17.925223    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:44:20.259514    3188 main.go:141] libmachine: [stdout =====>] : 
	I0229 00:44:20.259798    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:21.262175    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:44:23.329770    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:44:23.329770    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:23.329770    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:44:25.750635    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:44:25.750635    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:25.750751    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:44:27.784000    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:44:27.784256    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:27.784256    3188 machine.go:88] provisioning docker machine ...
	I0229 00:44:27.784414    3188 buildroot.go:166] provisioning hostname "addons-611800"
	I0229 00:44:27.784414    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:44:29.821472    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:44:29.821622    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:29.821622    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:44:32.225485    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:44:32.225485    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:32.230389    3188 main.go:141] libmachine: Using SSH client type: native
	I0229 00:44:32.239924    3188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.6.238 22 <nil> <nil>}
	I0229 00:44:32.239924    3188 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-611800 && echo "addons-611800" | sudo tee /etc/hostname
	I0229 00:44:32.415152    3188 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-611800
	
	I0229 00:44:32.415219    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:44:34.461448    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:44:34.461583    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:34.461661    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:44:36.884594    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:44:36.884594    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:36.890554    3188 main.go:141] libmachine: Using SSH client type: native
	I0229 00:44:36.891340    3188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.6.238 22 <nil> <nil>}
	I0229 00:44:36.891340    3188 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-611800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-611800/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-611800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 00:44:37.050367    3188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 00:44:37.050367    3188 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 00:44:37.050520    3188 buildroot.go:174] setting up certificates
	I0229 00:44:37.050520    3188 provision.go:83] configureAuth start
	I0229 00:44:37.050653    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:44:39.076961    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:44:39.077548    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:39.077636    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:44:41.499709    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:44:41.499709    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:41.500181    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:44:43.494155    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:44:43.494234    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:43.494307    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:44:45.930304    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:44:45.930530    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:45.930530    3188 provision.go:138] copyHostCerts
	I0229 00:44:45.930530    3188 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 00:44:45.931859    3188 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 00:44:45.933616    3188 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0229 00:44:45.934990    3188 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-611800 san=[172.19.6.238 172.19.6.238 localhost 127.0.0.1 minikube addons-611800]
	I0229 00:44:46.197377    3188 provision.go:172] copyRemoteCerts
	I0229 00:44:46.205360    3188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 00:44:46.206371    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:44:48.207865    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:44:48.207865    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:48.207865    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:44:50.643888    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:44:50.643888    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:50.644059    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:44:50.754712    3188 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5490976s)
	I0229 00:44:50.755388    3188 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 00:44:50.808675    3188 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0229 00:44:50.857343    3188 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 00:44:50.909643    3188 provision.go:86] duration metric: configureAuth took 13.8583479s
	I0229 00:44:50.909643    3188 buildroot.go:189] setting minikube options for container-runtime
	I0229 00:44:50.910639    3188 config.go:182] Loaded profile config "addons-611800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 00:44:50.910639    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:44:52.942568    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:44:52.942568    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:52.943140    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:44:55.362510    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:44:55.362510    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:55.366858    3188 main.go:141] libmachine: Using SSH client type: native
	I0229 00:44:55.367448    3188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.6.238 22 <nil> <nil>}
	I0229 00:44:55.367448    3188 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 00:44:55.507877    3188 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 00:44:55.507877    3188 buildroot.go:70] root file system type: tmpfs
	I0229 00:44:55.507877    3188 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 00:44:55.507877    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:44:57.525054    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:44:57.525097    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:57.525097    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:44:59.952514    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:44:59.953263    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:44:59.958131    3188 main.go:141] libmachine: Using SSH client type: native
	I0229 00:44:59.958472    3188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.6.238 22 <nil> <nil>}
	I0229 00:44:59.958472    3188 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 00:45:00.122617    3188 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 00:45:00.122748    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:45:02.148615    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:45:02.148615    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:45:02.148720    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:45:04.583913    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:45:04.583913    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:45:04.588257    3188 main.go:141] libmachine: Using SSH client type: native
	I0229 00:45:04.588455    3188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.6.238 22 <nil> <nil>}
	I0229 00:45:04.588455    3188 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 00:45:05.670841    3188 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 00:45:05.671019    3188 machine.go:91] provisioned docker machine in 37.8846446s
	I0229 00:45:05.671019    3188 client.go:171] LocalClient.Create took 1m44.3496667s
	I0229 00:45:05.671101    3188 start.go:167] duration metric: libmachine.API.Create for "addons-611800" took 1m44.3496667s
	I0229 00:45:05.671203    3188 start.go:300] post-start starting for "addons-611800" (driver="hyperv")
	I0229 00:45:05.671227    3188 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 00:45:05.681358    3188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 00:45:05.681484    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:45:07.663306    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:45:07.663465    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:45:07.663465    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:45:10.103847    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:45:10.104800    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:45:10.104800    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:45:10.217856    3188 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5361186s)
	I0229 00:45:10.227091    3188 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 00:45:10.237250    3188 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 00:45:10.237250    3188 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 00:45:10.237843    3188 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 00:45:10.237843    3188 start.go:303] post-start completed in 4.5663607s
	I0229 00:45:10.240631    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:45:12.285751    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:45:12.285751    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:45:12.285751    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:45:14.685757    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:45:14.685757    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:45:14.685757    3188 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\config.json ...
	I0229 00:45:14.688285    3188 start.go:128] duration metric: createHost completed in 1m53.367292s
	I0229 00:45:14.688447    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:45:16.723029    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:45:16.723029    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:45:16.723029    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:45:19.166837    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:45:19.166880    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:45:19.171820    3188 main.go:141] libmachine: Using SSH client type: native
	I0229 00:45:19.171899    3188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.6.238 22 <nil> <nil>}
	I0229 00:45:19.171899    3188 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 00:45:19.309056    3188 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709167519.479275515
	
	I0229 00:45:19.309056    3188 fix.go:206] guest clock: 1709167519.479275515
	I0229 00:45:19.309056    3188 fix.go:219] Guest: 2024-02-29 00:45:19.479275515 +0000 UTC Remote: 2024-02-29 00:45:14.6883684 +0000 UTC m=+118.750815901 (delta=4.790907115s)
	I0229 00:45:19.309056    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:45:21.332595    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:45:21.332975    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:45:21.332975    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:45:23.714556    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:45:23.714609    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:45:23.718263    3188 main.go:141] libmachine: Using SSH client type: native
	I0229 00:45:23.718263    3188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.6.238 22 <nil> <nil>}
	I0229 00:45:23.718792    3188 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709167519
	I0229 00:45:23.864591    3188 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 00:45:19 UTC 2024
	
	I0229 00:45:23.864651    3188 fix.go:226] clock set: Thu Feb 29 00:45:19 UTC 2024
	 (err=<nil>)
	I0229 00:45:23.864708    3188 start.go:83] releasing machines lock for "addons-611800", held for 2m2.5431453s
	I0229 00:45:23.865101    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:45:25.875791    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:45:25.875791    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:45:25.876135    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:45:28.288799    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:45:28.288799    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:45:28.293148    3188 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 00:45:28.293336    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:45:28.301714    3188 ssh_runner.go:195] Run: cat /version.json
	I0229 00:45:28.301714    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:45:30.346022    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:45:30.346022    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:45:30.346022    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:45:30.359633    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:45:30.359633    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:45:30.359633    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:45:32.839837    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:45:32.840038    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:45:32.840038    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:45:32.862070    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:45:32.862170    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:45:32.862425    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:45:33.187154    3188 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8937326s)
	I0229 00:45:33.187154    3188 ssh_runner.go:235] Completed: cat /version.json: (4.885167s)
	I0229 00:45:33.196225    3188 ssh_runner.go:195] Run: systemctl --version
	I0229 00:45:33.217591    3188 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 00:45:33.227508    3188 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 00:45:33.238703    3188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 00:45:33.271385    3188 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 00:45:33.271385    3188 start.go:475] detecting cgroup driver to use...
	I0229 00:45:33.271984    3188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 00:45:33.313749    3188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 00:45:33.348975    3188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 00:45:33.369685    3188 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 00:45:33.380382    3188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 00:45:33.415590    3188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 00:45:33.446686    3188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 00:45:33.475449    3188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 00:45:33.505280    3188 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 00:45:33.534884    3188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 00:45:33.564703    3188 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 00:45:33.591996    3188 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 00:45:33.621102    3188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 00:45:33.830546    3188 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 00:45:33.864435    3188 start.go:475] detecting cgroup driver to use...
	I0229 00:45:33.875763    3188 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 00:45:33.909252    3188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 00:45:33.938241    3188 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 00:45:33.977250    3188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 00:45:34.025858    3188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 00:45:34.061883    3188 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 00:45:34.117458    3188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 00:45:34.141451    3188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 00:45:34.190388    3188 ssh_runner.go:195] Run: which cri-dockerd
	I0229 00:45:34.206293    3188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 00:45:34.225682    3188 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 00:45:34.267758    3188 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 00:45:34.470057    3188 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 00:45:34.673913    3188 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 00:45:34.673913    3188 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 00:45:34.716429    3188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 00:45:34.915672    3188 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 00:45:36.452134    3188 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5363754s)
	I0229 00:45:36.465177    3188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 00:45:36.499849    3188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 00:45:36.534982    3188 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 00:45:36.748744    3188 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 00:45:36.949430    3188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 00:45:37.150944    3188 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 00:45:37.190930    3188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 00:45:37.230378    3188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 00:45:37.440727    3188 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 00:45:37.548499    3188 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 00:45:37.558409    3188 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 00:45:37.573154    3188 start.go:543] Will wait 60s for crictl version
	I0229 00:45:37.582285    3188 ssh_runner.go:195] Run: which crictl
	I0229 00:45:37.598595    3188 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 00:45:37.673172    3188 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 00:45:37.683851    3188 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 00:45:37.725559    3188 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 00:45:37.760236    3188 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 00:45:37.760422    3188 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 00:45:37.764647    3188 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 00:45:37.764746    3188 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 00:45:37.764746    3188 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 00:45:37.764746    3188 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:a3:c1 Flags:up|broadcast|multicast|running}
	I0229 00:45:37.766958    3188 ip.go:210] interface addr: fe80::fc78:4865:5cac:d448/64
	I0229 00:45:37.766958    3188 ip.go:210] interface addr: 172.19.0.1/20
	I0229 00:45:37.777718    3188 ssh_runner.go:195] Run: grep 172.19.0.1	host.minikube.internal$ /etc/hosts
	I0229 00:45:37.783184    3188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 00:45:37.806690    3188 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 00:45:37.813076    3188 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 00:45:37.838155    3188 docker.go:685] Got preloaded images: 
	I0229 00:45:37.838155    3188 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0229 00:45:37.848798    3188 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 00:45:37.877374    3188 ssh_runner.go:195] Run: which lz4
	I0229 00:45:37.893517    3188 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 00:45:37.901313    3188 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 00:45:37.901510    3188 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0229 00:45:39.930752    3188 docker.go:649] Took 2.045543 seconds to copy over tarball
	I0229 00:45:39.941989    3188 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 00:45:48.130148    3188 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.1877009s)
	I0229 00:45:48.130148    3188 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 00:45:48.202853    3188 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 00:45:48.225774    3188 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0229 00:45:48.273790    3188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 00:45:48.456283    3188 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 00:45:52.719955    3188 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.2634339s)
	I0229 00:45:52.727186    3188 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 00:45:52.752171    3188 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 00:45:52.752275    3188 cache_images.go:84] Images are preloaded, skipping loading
	I0229 00:45:52.761224    3188 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 00:45:52.796313    3188 cni.go:84] Creating CNI manager for ""
	I0229 00:45:52.796677    3188 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 00:45:52.796677    3188 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 00:45:52.796791    3188 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.6.238 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-611800 NodeName:addons-611800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.6.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.6.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 00:45:52.797115    3188 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.6.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-611800"
	  kubeletExtraArgs:
	    node-ip: 172.19.6.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.6.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 00:45:52.797377    3188 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-611800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.6.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-611800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 00:45:52.807818    3188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 00:45:52.830943    3188 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 00:45:52.839896    3188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 00:45:52.857514    3188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0229 00:45:52.891773    3188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 00:45:52.923200    3188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0229 00:45:52.966440    3188 ssh_runner.go:195] Run: grep 172.19.6.238	control-plane.minikube.internal$ /etc/hosts
	I0229 00:45:52.974151    3188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.6.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 00:45:52.995777    3188 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800 for IP: 172.19.6.238
	I0229 00:45:52.995777    3188 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:45:52.996576    3188 certs.go:204] generating minikubeCA CA: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 00:45:53.357419    3188 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt ...
	I0229 00:45:53.358419    3188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt: {Name:mkecc83abf7dbcd2f2b0fd63bac36f2a7fe554cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:45:53.359641    3188 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key ...
	I0229 00:45:53.359641    3188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key: {Name:mk56e2872d5c5070a04729e59e76e7398d15f15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:45:53.360614    3188 certs.go:204] generating proxyClientCA CA: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 00:45:53.620525    3188 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0229 00:45:53.620525    3188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkfcb9723e08b8d76b8a2e73084c13f930548396 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:45:53.621570    3188 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key ...
	I0229 00:45:53.621570    3188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkd23bfd48ce10457a367dee40c81533c5cc7b5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:45:53.623843    3188 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.key
	I0229 00:45:53.624332    3188 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt with IP's: []
	I0229 00:45:53.810931    3188 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt ...
	I0229 00:45:53.810931    3188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: {Name:mk68574cf3e6cfa31605910be1d6be0c8b99f027 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:45:53.812163    3188 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.key ...
	I0229 00:45:53.812163    3188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.key: {Name:mkc7c5af2875726e0614f672ed3e13987cc865f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:45:53.813394    3188 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\apiserver.key.c7136a83
	I0229 00:45:53.813872    3188 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\apiserver.crt.c7136a83 with IP's: [172.19.6.238 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 00:45:54.059619    3188 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\apiserver.crt.c7136a83 ...
	I0229 00:45:54.059619    3188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\apiserver.crt.c7136a83: {Name:mkab8f48e71edbfc6bc83f2bfe08047782928f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:45:54.062245    3188 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\apiserver.key.c7136a83 ...
	I0229 00:45:54.062245    3188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\apiserver.key.c7136a83: {Name:mk646aefa40c79abc5063673215660831b692c2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:45:54.063314    3188 certs.go:337] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\apiserver.crt.c7136a83 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\apiserver.crt
	I0229 00:45:54.075313    3188 certs.go:341] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\apiserver.key.c7136a83 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\apiserver.key
	I0229 00:45:54.076312    3188 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\proxy-client.key
	I0229 00:45:54.076508    3188 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\proxy-client.crt with IP's: []
	I0229 00:45:54.344835    3188 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\proxy-client.crt ...
	I0229 00:45:54.344835    3188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\proxy-client.crt: {Name:mk3deddbd9022c384c2c36e93cfe1d05637735f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:45:54.345846    3188 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\proxy-client.key ...
	I0229 00:45:54.345846    3188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\proxy-client.key: {Name:mka5311fb26d0961cdb89b09b7995ba67af70296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:45:54.358255    3188 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 00:45:54.358418    3188 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 00:45:54.358418    3188 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 00:45:54.358418    3188 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0229 00:45:54.359561    3188 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 00:45:54.408492    3188 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 00:45:54.456053    3188 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 00:45:54.505141    3188 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 00:45:54.551799    3188 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 00:45:54.598974    3188 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 00:45:54.643734    3188 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 00:45:54.689049    3188 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 00:45:54.733726    3188 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 00:45:54.778493    3188 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 00:45:54.823558    3188 ssh_runner.go:195] Run: openssl version
	I0229 00:45:54.842405    3188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 00:45:54.874679    3188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 00:45:54.882035    3188 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 00:45:54.891010    3188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 00:45:54.908336    3188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 00:45:54.939703    3188 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 00:45:54.948313    3188 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 00:45:54.948413    3188 kubeadm.go:404] StartCluster: {Name:addons-611800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-611800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.6.238 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 00:45:54.956116    3188 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 00:45:54.990012    3188 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 00:45:55.018071    3188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 00:45:55.045915    3188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 00:45:55.064277    3188 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 00:45:55.064447    3188 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 00:45:55.151486    3188 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 00:45:55.151486    3188 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 00:45:55.372346    3188 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 00:45:55.372869    3188 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 00:45:55.373087    3188 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 00:45:55.782937    3188 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 00:45:55.784203    3188 out.go:204]   - Generating certificates and keys ...
	I0229 00:45:55.784623    3188 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 00:45:55.784926    3188 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 00:45:55.879931    3188 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 00:45:56.009970    3188 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 00:45:56.139136    3188 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 00:45:56.313979    3188 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 00:45:56.514927    3188 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 00:45:56.514927    3188 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-611800 localhost] and IPs [172.19.6.238 127.0.0.1 ::1]
	I0229 00:45:56.729749    3188 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 00:45:56.729749    3188 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-611800 localhost] and IPs [172.19.6.238 127.0.0.1 ::1]
	I0229 00:45:56.833841    3188 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 00:45:56.920888    3188 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 00:45:57.080663    3188 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 00:45:57.080911    3188 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 00:45:57.203519    3188 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 00:45:57.459345    3188 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 00:45:57.618021    3188 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 00:45:58.060581    3188 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 00:45:58.064226    3188 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 00:45:58.067722    3188 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 00:45:58.068452    3188 out.go:204]   - Booting up control plane ...
	I0229 00:45:58.068789    3188 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 00:45:58.069680    3188 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 00:45:58.070691    3188 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 00:45:58.096781    3188 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 00:45:58.097248    3188 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 00:45:58.097248    3188 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 00:45:58.296535    3188 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 00:46:05.298776    3188 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002946 seconds
	I0229 00:46:05.299346    3188 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 00:46:05.318794    3188 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 00:46:05.854393    3188 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 00:46:05.854871    3188 kubeadm.go:322] [mark-control-plane] Marking the node addons-611800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 00:46:06.373533    3188 kubeadm.go:322] [bootstrap-token] Using token: se6tbn.b1w5hhi2rban1pup
	I0229 00:46:06.374131    3188 out.go:204]   - Configuring RBAC rules ...
	I0229 00:46:06.374131    3188 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 00:46:06.384155    3188 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 00:46:06.396688    3188 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 00:46:06.405820    3188 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 00:46:06.414477    3188 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 00:46:06.419672    3188 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 00:46:06.444017    3188 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 00:46:06.829112    3188 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 00:46:06.875790    3188 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 00:46:06.877071    3188 kubeadm.go:322] 
	I0229 00:46:06.877942    3188 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 00:46:06.877942    3188 kubeadm.go:322] 
	I0229 00:46:06.878149    3188 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 00:46:06.878149    3188 kubeadm.go:322] 
	I0229 00:46:06.878149    3188 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 00:46:06.878149    3188 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 00:46:06.878149    3188 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 00:46:06.878149    3188 kubeadm.go:322] 
	I0229 00:46:06.878695    3188 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 00:46:06.878777    3188 kubeadm.go:322] 
	I0229 00:46:06.878982    3188 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 00:46:06.879053    3188 kubeadm.go:322] 
	I0229 00:46:06.879377    3188 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 00:46:06.879654    3188 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 00:46:06.880064    3188 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 00:46:06.880147    3188 kubeadm.go:322] 
	I0229 00:46:06.880436    3188 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 00:46:06.880848    3188 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 00:46:06.880917    3188 kubeadm.go:322] 
	I0229 00:46:06.880997    3188 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token se6tbn.b1w5hhi2rban1pup \
	I0229 00:46:06.880997    3188 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 \
	I0229 00:46:06.880997    3188 kubeadm.go:322] 	--control-plane 
	I0229 00:46:06.880997    3188 kubeadm.go:322] 
	I0229 00:46:06.881616    3188 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 00:46:06.881680    3188 kubeadm.go:322] 
	I0229 00:46:06.881949    3188 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token se6tbn.b1w5hhi2rban1pup \
	I0229 00:46:06.882295    3188 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 
	I0229 00:46:06.886226    3188 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 00:46:06.886226    3188 cni.go:84] Creating CNI manager for ""
	I0229 00:46:06.886226    3188 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 00:46:06.886931    3188 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 00:46:06.901632    3188 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 00:46:06.930557    3188 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 00:46:06.990492    3188 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 00:46:07.001845    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:07.004837    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=addons-611800 minikube.k8s.io/updated_at=2024_02_29T00_46_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:07.024346    3188 ops.go:34] apiserver oom_adj: -16
	I0229 00:46:07.347333    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:07.852597    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:08.358365    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:08.860356    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:09.361403    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:09.848141    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:10.355877    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:10.858078    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:11.362965    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:11.845868    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:12.347093    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:12.852102    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:13.353997    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:13.854449    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:14.361799    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:14.860060    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:15.361472    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:15.849777    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:16.351459    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:16.847984    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:17.354584    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:17.856444    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:18.356613    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:18.858331    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:19.350999    3188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 00:46:19.519035    3188 kubeadm.go:1088] duration metric: took 12.5278416s to wait for elevateKubeSystemPrivileges.
	I0229 00:46:19.519161    3188 kubeadm.go:406] StartCluster complete in 24.5693154s
	I0229 00:46:19.519225    3188 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:46:19.519635    3188 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 00:46:19.520584    3188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:46:19.522137    3188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 00:46:19.522515    3188 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0229 00:46:19.522763    3188 addons.go:69] Setting helm-tiller=true in profile "addons-611800"
	I0229 00:46:19.522763    3188 addons.go:69] Setting yakd=true in profile "addons-611800"
	I0229 00:46:19.522763    3188 addons.go:69] Setting registry=true in profile "addons-611800"
	I0229 00:46:19.523081    3188 config.go:182] Loaded profile config "addons-611800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 00:46:19.523081    3188 addons.go:234] Setting addon yakd=true in "addons-611800"
	I0229 00:46:19.523081    3188 addons.go:234] Setting addon registry=true in "addons-611800"
	I0229 00:46:19.522763    3188 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-611800"
	I0229 00:46:19.522763    3188 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-611800"
	I0229 00:46:19.523081    3188 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-611800"
	I0229 00:46:19.523081    3188 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-611800"
	I0229 00:46:19.523081    3188 host.go:66] Checking if "addons-611800" exists ...
	I0229 00:46:19.523081    3188 host.go:66] Checking if "addons-611800" exists ...
	I0229 00:46:19.522763    3188 addons.go:69] Setting ingress=true in profile "addons-611800"
	I0229 00:46:19.523081    3188 host.go:66] Checking if "addons-611800" exists ...
	I0229 00:46:19.523081    3188 addons.go:234] Setting addon ingress=true in "addons-611800"
	I0229 00:46:19.522763    3188 addons.go:69] Setting metrics-server=true in profile "addons-611800"
	I0229 00:46:19.523081    3188 addons.go:234] Setting addon metrics-server=true in "addons-611800"
	I0229 00:46:19.523081    3188 host.go:66] Checking if "addons-611800" exists ...
	I0229 00:46:19.522763    3188 addons.go:69] Setting volumesnapshots=true in profile "addons-611800"
	I0229 00:46:19.523081    3188 addons.go:234] Setting addon volumesnapshots=true in "addons-611800"
	I0229 00:46:19.523081    3188 host.go:66] Checking if "addons-611800" exists ...
	I0229 00:46:19.523754    3188 host.go:66] Checking if "addons-611800" exists ...
	I0229 00:46:19.522836    3188 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-611800"
	I0229 00:46:19.523935    3188 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-611800"
	I0229 00:46:19.522836    3188 addons.go:69] Setting cloud-spanner=true in profile "addons-611800"
	I0229 00:46:19.524003    3188 host.go:66] Checking if "addons-611800" exists ...
	I0229 00:46:19.524122    3188 addons.go:234] Setting addon cloud-spanner=true in "addons-611800"
	I0229 00:46:19.524208    3188 host.go:66] Checking if "addons-611800" exists ...
	I0229 00:46:19.522836    3188 addons.go:69] Setting default-storageclass=true in profile "addons-611800"
	I0229 00:46:19.524394    3188 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-611800"
	I0229 00:46:19.522836    3188 addons.go:69] Setting gcp-auth=true in profile "addons-611800"
	I0229 00:46:19.524752    3188 mustload.go:65] Loading cluster: addons-611800
	I0229 00:46:19.522836    3188 addons.go:69] Setting ingress-dns=true in profile "addons-611800"
	I0229 00:46:19.524936    3188 addons.go:234] Setting addon ingress-dns=true in "addons-611800"
	I0229 00:46:19.524936    3188 host.go:66] Checking if "addons-611800" exists ...
	I0229 00:46:19.524936    3188 config.go:182] Loaded profile config "addons-611800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 00:46:19.522836    3188 addons.go:69] Setting inspektor-gadget=true in profile "addons-611800"
	I0229 00:46:19.522942    3188 addons.go:234] Setting addon helm-tiller=true in "addons-611800"
	I0229 00:46:19.522763    3188 addons.go:69] Setting storage-provisioner=true in profile "addons-611800"
	I0229 00:46:19.524936    3188 addons.go:234] Setting addon storage-provisioner=true in "addons-611800"
	I0229 00:46:19.524936    3188 host.go:66] Checking if "addons-611800" exists ...
	I0229 00:46:19.524936    3188 addons.go:234] Setting addon inspektor-gadget=true in "addons-611800"
	I0229 00:46:19.524936    3188 host.go:66] Checking if "addons-611800" exists ...
	I0229 00:46:19.524936    3188 host.go:66] Checking if "addons-611800" exists ...
	I0229 00:46:19.524936    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:19.528680    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:19.530469    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:19.531330    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:19.531330    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:19.531330    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:19.531330    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:19.531889    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:19.531889    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:19.532450    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:19.532450    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:19.532450    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:19.532959    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:19.533145    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:19.533573    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:20.080441    3188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 00:46:20.275065    3188 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-611800" context rescaled to 1 replicas
	I0229 00:46:20.275065    3188 start.go:223] Will wait 6m0s for node &{Name: IP:172.19.6.238 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 00:46:20.276063    3188 out.go:177] * Verifying Kubernetes components...
	I0229 00:46:20.292075    3188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 00:46:23.405412    3188 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.3247847s)
	I0229 00:46:23.405412    3188 start.go:929] {"host.minikube.internal": 172.19.0.1} host record injected into CoreDNS's ConfigMap
	I0229 00:46:23.405412    3188 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.1131628s)
	I0229 00:46:23.409397    3188 node_ready.go:35] waiting up to 6m0s for node "addons-611800" to be "Ready" ...
	I0229 00:46:23.520928    3188 node_ready.go:49] node "addons-611800" has status "Ready":"True"
	I0229 00:46:23.520928    3188 node_ready.go:38] duration metric: took 111.5243ms waiting for node "addons-611800" to be "Ready" ...
	I0229 00:46:23.520928    3188 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 00:46:23.549808    3188 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kt79c" in "kube-system" namespace to be "Ready" ...
	I0229 00:46:25.257667    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:25.257667    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:25.265667    3188 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0229 00:46:25.266672    3188 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0229 00:46:25.266672    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0229 00:46:25.266672    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:25.306153    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:25.307139    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:25.311153    3188 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.4
	I0229 00:46:25.329132    3188 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0229 00:46:25.329132    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0229 00:46:25.329132    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:25.336145    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:25.336145    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:25.355242    3188 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-611800"
	I0229 00:46:25.355242    3188 host.go:66] Checking if "addons-611800" exists ...
	I0229 00:46:25.356531    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:25.627928    3188 pod_ready.go:102] pod "coredns-5dd5756b68-kt79c" in "kube-system" namespace has status "Ready":"False"
	I0229 00:46:25.787482    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:25.787482    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:25.788409    3188 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0229 00:46:25.789409    3188 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0229 00:46:25.789409    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0229 00:46:25.789409    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:25.967533    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:25.967533    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:25.969531    3188 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0229 00:46:25.978788    3188 out.go:177]   - Using image docker.io/registry:2.8.3
	I0229 00:46:25.979742    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:25.990731    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:25.982748    3188 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0229 00:46:25.998744    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0229 00:46:25.999456    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:25.993725    3188 addons.go:234] Setting addon default-storageclass=true in "addons-611800"
	I0229 00:46:26.000799    3188 host.go:66] Checking if "addons-611800" exists ...
	I0229 00:46:26.002853    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:26.116435    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:26.116435    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:26.116435    3188 host.go:66] Checking if "addons-611800" exists ...
	I0229 00:46:26.144074    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:26.145068    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:26.147073    3188 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0229 00:46:26.148070    3188 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0229 00:46:26.148070    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0229 00:46:26.148070    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:26.167404    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:26.167404    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:26.169091    3188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0229 00:46:26.169743    3188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.6
	I0229 00:46:26.170777    3188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0229 00:46:26.171663    3188 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 00:46:26.171663    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0229 00:46:26.171663    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:26.194087    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:26.194087    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:26.197107    3188 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0229 00:46:26.199084    3188 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 00:46:26.199084    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 00:46:26.199084    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:26.194087    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:26.202080    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:26.210110    3188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0229 00:46:26.212094    3188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0229 00:46:26.222231    3188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0229 00:46:26.232226    3188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0229 00:46:26.235200    3188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0229 00:46:26.237206    3188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0229 00:46:26.245202    3188 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0229 00:46:26.246231    3188 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0229 00:46:26.248243    3188 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0229 00:46:26.248243    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0229 00:46:26.248243    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:26.280223    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:26.280223    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:26.304147    3188 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0229 00:46:26.314340    3188 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 00:46:26.314340    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0229 00:46:26.314952    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:26.559258    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:26.559258    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:26.562262    3188 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0229 00:46:26.563258    3188 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0229 00:46:26.563258    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0229 00:46:26.563258    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:26.742060    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:26.742060    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:26.756058    3188 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0229 00:46:26.840679    3188 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0229 00:46:26.840753    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0229 00:46:26.840753    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:26.964716    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:26.964716    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:26.965715    3188 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 00:46:26.966720    3188 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 00:46:26.966720    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 00:46:26.966720    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:27.912278    3188 pod_ready.go:102] pod "coredns-5dd5756b68-kt79c" in "kube-system" namespace has status "Ready":"False"
	I0229 00:46:30.125416    3188 pod_ready.go:102] pod "coredns-5dd5756b68-kt79c" in "kube-system" namespace has status "Ready":"False"
	I0229 00:46:30.644663    3188 pod_ready.go:92] pod "coredns-5dd5756b68-kt79c" in "kube-system" namespace has status "Ready":"True"
	I0229 00:46:30.644663    3188 pod_ready.go:81] duration metric: took 7.0944577s waiting for pod "coredns-5dd5756b68-kt79c" in "kube-system" namespace to be "Ready" ...
	I0229 00:46:30.644663    3188 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-611800" in "kube-system" namespace to be "Ready" ...
	I0229 00:46:30.763430    3188 pod_ready.go:92] pod "etcd-addons-611800" in "kube-system" namespace has status "Ready":"True"
	I0229 00:46:30.764460    3188 pod_ready.go:81] duration metric: took 119.7899ms waiting for pod "etcd-addons-611800" in "kube-system" namespace to be "Ready" ...
	I0229 00:46:30.764460    3188 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-611800" in "kube-system" namespace to be "Ready" ...
	I0229 00:46:30.793646    3188 pod_ready.go:92] pod "kube-apiserver-addons-611800" in "kube-system" namespace has status "Ready":"True"
	I0229 00:46:30.793646    3188 pod_ready.go:81] duration metric: took 29.1842ms waiting for pod "kube-apiserver-addons-611800" in "kube-system" namespace to be "Ready" ...
	I0229 00:46:30.793646    3188 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-611800" in "kube-system" namespace to be "Ready" ...
	I0229 00:46:30.894188    3188 pod_ready.go:92] pod "kube-controller-manager-addons-611800" in "kube-system" namespace has status "Ready":"True"
	I0229 00:46:30.894188    3188 pod_ready.go:81] duration metric: took 100.5369ms waiting for pod "kube-controller-manager-addons-611800" in "kube-system" namespace to be "Ready" ...
	I0229 00:46:30.894188    3188 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qf92m" in "kube-system" namespace to be "Ready" ...
	I0229 00:46:30.935205    3188 pod_ready.go:92] pod "kube-proxy-qf92m" in "kube-system" namespace has status "Ready":"True"
	I0229 00:46:30.935205    3188 pod_ready.go:81] duration metric: took 41.0149ms waiting for pod "kube-proxy-qf92m" in "kube-system" namespace to be "Ready" ...
	I0229 00:46:30.935205    3188 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-611800" in "kube-system" namespace to be "Ready" ...
	I0229 00:46:30.984180    3188 pod_ready.go:92] pod "kube-scheduler-addons-611800" in "kube-system" namespace has status "Ready":"True"
	I0229 00:46:30.984180    3188 pod_ready.go:81] duration metric: took 48.9713ms waiting for pod "kube-scheduler-addons-611800" in "kube-system" namespace to be "Ready" ...
	I0229 00:46:30.984180    3188 pod_ready.go:38] duration metric: took 7.4628337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 00:46:30.984180    3188 api_server.go:52] waiting for apiserver process to appear ...
	I0229 00:46:31.006819    3188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 00:46:31.145784    3188 api_server.go:72] duration metric: took 10.8701103s to wait for apiserver process to appear ...
	I0229 00:46:31.145784    3188 api_server.go:88] waiting for apiserver healthz status ...
	I0229 00:46:31.145784    3188 api_server.go:253] Checking apiserver healthz at https://172.19.6.238:8443/healthz ...
	I0229 00:46:31.171769    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:31.171769    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:31.173785    3188 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0229 00:46:31.175777    3188 out.go:177]   - Using image docker.io/busybox:stable
	I0229 00:46:31.188795    3188 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0229 00:46:31.188795    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0229 00:46:31.177773    3188 api_server.go:279] https://172.19.6.238:8443/healthz returned 200:
	ok
	I0229 00:46:31.188795    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:31.195789    3188 api_server.go:141] control plane version: v1.28.4
	I0229 00:46:31.195789    3188 api_server.go:131] duration metric: took 50.0018ms to wait for apiserver health ...
	I0229 00:46:31.195789    3188 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 00:46:31.223809    3188 system_pods.go:59] 6 kube-system pods found
	I0229 00:46:31.223809    3188 system_pods.go:61] "coredns-5dd5756b68-kt79c" [1b2ac1eb-1b63-40a0-a46c-e5b7971a7490] Running
	I0229 00:46:31.223809    3188 system_pods.go:61] "etcd-addons-611800" [6d347d56-e4ee-4bf4-abff-576887b2c455] Running
	I0229 00:46:31.223809    3188 system_pods.go:61] "kube-apiserver-addons-611800" [33c7b73e-dced-4a16-aff9-acfea2a28d89] Running
	I0229 00:46:31.223809    3188 system_pods.go:61] "kube-controller-manager-addons-611800" [30e7256c-80f8-4601-926a-e8103d8ac387] Running
	I0229 00:46:31.224782    3188 system_pods.go:61] "kube-proxy-qf92m" [55ebb9e8-7434-466b-bcd9-e6747e9be13a] Running
	I0229 00:46:31.224782    3188 system_pods.go:61] "kube-scheduler-addons-611800" [3485499a-ddad-4b23-af62-61725292951d] Running
	I0229 00:46:31.224782    3188 system_pods.go:74] duration metric: took 28.9908ms to wait for pod list to return data ...
	I0229 00:46:31.224782    3188 default_sa.go:34] waiting for default service account to be created ...
	I0229 00:46:31.244782    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:31.244782    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:31.244782    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:46:31.327335    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:31.327335    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:31.327335    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:46:31.366924    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:31.366924    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:31.366924    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:46:31.395232    3188 default_sa.go:45] found service account: "default"
	I0229 00:46:31.395232    3188 default_sa.go:55] duration metric: took 170.4405ms for default service account to be created ...
	I0229 00:46:31.395232    3188 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 00:46:31.625790    3188 system_pods.go:86] 6 kube-system pods found
	I0229 00:46:31.625790    3188 system_pods.go:89] "coredns-5dd5756b68-kt79c" [1b2ac1eb-1b63-40a0-a46c-e5b7971a7490] Running
	I0229 00:46:31.625790    3188 system_pods.go:89] "etcd-addons-611800" [6d347d56-e4ee-4bf4-abff-576887b2c455] Running
	I0229 00:46:31.625973    3188 system_pods.go:89] "kube-apiserver-addons-611800" [33c7b73e-dced-4a16-aff9-acfea2a28d89] Running
	I0229 00:46:31.625973    3188 system_pods.go:89] "kube-controller-manager-addons-611800" [30e7256c-80f8-4601-926a-e8103d8ac387] Running
	I0229 00:46:31.625973    3188 system_pods.go:89] "kube-proxy-qf92m" [55ebb9e8-7434-466b-bcd9-e6747e9be13a] Running
	I0229 00:46:31.625973    3188 system_pods.go:89] "kube-scheduler-addons-611800" [3485499a-ddad-4b23-af62-61725292951d] Running
	I0229 00:46:31.625973    3188 system_pods.go:126] duration metric: took 230.7288ms to wait for k8s-apps to be running ...
	I0229 00:46:31.625973    3188 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 00:46:31.639648    3188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 00:46:31.701736    3188 system_svc.go:56] duration metric: took 75.7585ms WaitForService to wait for kubelet.
	I0229 00:46:31.701736    3188 kubeadm.go:581] duration metric: took 11.4260307s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 00:46:31.701736    3188 node_conditions.go:102] verifying NodePressure condition ...
	I0229 00:46:31.749437    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:31.749437    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:31.749437    3188 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 00:46:31.749437    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 00:46:31.749437    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:31.775210    3188 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 00:46:31.775210    3188 node_conditions.go:123] node cpu capacity is 2
	I0229 00:46:31.775210    3188 node_conditions.go:105] duration metric: took 73.47ms to run NodePressure ...
	I0229 00:46:31.775210    3188 start.go:228] waiting for startup goroutines ...
	I0229 00:46:31.838526    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:31.839724    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:31.839895    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:46:31.869714    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:31.869714    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:31.869714    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:46:31.896108    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:31.897111    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:31.897111    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:46:31.903110    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:31.903110    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:31.903110    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:46:32.032333    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:32.032333    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:32.032333    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:46:32.106234    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:32.106234    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:32.106234    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:46:32.644308    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:32.644308    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:32.644308    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:46:32.668559    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:32.668559    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:32.669426    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:46:32.900620    3188 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0229 00:46:32.900620    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:35.358583    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:35.358583    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:35.358583    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:46:36.834688    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:36.834688    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:36.834688    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:46:37.334441    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:37.334441    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:37.334441    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:46:37.604176    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:46:37.604176    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:37.604176    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:46:37.894738    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:46:37.894738    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:37.894738    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:46:37.976057    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:46:37.976057    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:37.978232    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:46:38.002003    3188 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0229 00:46:38.002003    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0229 00:46:38.129267    3188 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0229 00:46:38.129267    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0229 00:46:38.204980    3188 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0229 00:46:38.204980    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0229 00:46:38.250173    3188 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0229 00:46:38.250173    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0229 00:46:38.313961    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:46:38.313961    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:38.315135    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:46:38.329928    3188 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0229 00:46:38.329928    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0229 00:46:38.360459    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:46:38.360459    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:38.361023    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:46:38.413741    3188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0229 00:46:38.446721    3188 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0229 00:46:38.446721    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0229 00:46:38.470258    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:46:38.470258    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:38.471253    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:46:38.556354    3188 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0229 00:46:38.556454    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0229 00:46:38.565601    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:46:38.565601    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:38.566570    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:46:38.627654    3188 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0229 00:46:38.627654    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0229 00:46:38.636012    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:46:38.636012    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:38.636012    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:46:38.648624    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:38.648624    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:38.648624    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:46:38.721121    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:46:38.721121    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:38.721121    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:46:38.776736    3188 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0229 00:46:38.776736    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0229 00:46:38.791975    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:46:38.792047    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:38.792047    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:46:38.836943    3188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0229 00:46:38.859555    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:46:38.859555    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:38.860565    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:46:38.869676    3188 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0229 00:46:38.869676    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0229 00:46:38.886368    3188 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0229 00:46:38.886368    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0229 00:46:38.968316    3188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0229 00:46:38.984325    3188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 00:46:39.107044    3188 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0229 00:46:39.107044    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0229 00:46:39.122023    3188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 00:46:39.182887    3188 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0229 00:46:39.182887    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0229 00:46:39.294768    3188 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 00:46:39.294857    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0229 00:46:39.359201    3188 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0229 00:46:39.359201    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0229 00:46:39.458229    3188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0229 00:46:39.459229    3188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0229 00:46:39.494293    3188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0229 00:46:39.499376    3188 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0229 00:46:39.499376    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0229 00:46:39.508835    3188 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 00:46:39.508835    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 00:46:39.580340    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:46:39.580340    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:39.580340    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:46:39.594338    3188 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0229 00:46:39.594338    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0229 00:46:39.715044    3188 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 00:46:39.715044    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 00:46:39.729382    3188 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0229 00:46:39.729480    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0229 00:46:39.815877    3188 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0229 00:46:39.815877    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0229 00:46:40.007525    3188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 00:46:40.026272    3188 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0229 00:46:40.026272    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0229 00:46:40.156675    3188 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0229 00:46:40.156675    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0229 00:46:40.227004    3188 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0229 00:46:40.227004    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0229 00:46:40.374517    3188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 00:46:40.492225    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:46:40.492283    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:40.492283    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:46:40.494346    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:46:40.494486    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:40.494725    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:46:40.513658    3188 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0229 00:46:40.513658    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0229 00:46:40.537060    3188 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0229 00:46:40.537122    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0229 00:46:40.717982    3188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.3039899s)
	I0229 00:46:40.860145    3188 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0229 00:46:40.860145    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0229 00:46:40.868211    3188 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0229 00:46:40.868211    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0229 00:46:41.150298    3188 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0229 00:46:41.150361    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0229 00:46:41.239975    3188 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0229 00:46:41.240065    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0229 00:46:41.283923    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:46:41.284686    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:41.284880    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:46:41.286885    3188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 00:46:41.349071    3188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0229 00:46:41.524907    3188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0229 00:46:41.709935    3188 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0229 00:46:41.709992    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0229 00:46:41.917886    3188 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0229 00:46:41.929433    3188 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0229 00:46:41.929592    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0229 00:46:42.150400    3188 addons.go:234] Setting addon gcp-auth=true in "addons-611800"
	I0229 00:46:42.150584    3188 host.go:66] Checking if "addons-611800" exists ...
	I0229 00:46:42.151160    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:42.170060    3188 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0229 00:46:42.170191    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0229 00:46:42.432365    3188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0229 00:46:44.462940    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:44.462940    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:44.472992    3188 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0229 00:46:44.472992    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-611800 ).state
	I0229 00:46:46.306680    3188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.4692651s)
	W0229 00:46:46.306791    3188 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0229 00:46:46.306914    3188 retry.go:31] will retry after 286.610901ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0229 00:46:46.308482    3188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.3397546s)
	I0229 00:46:46.309694    3188 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-611800 service yakd-dashboard -n yakd-dashboard
	
	I0229 00:46:46.615951    3188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0229 00:46:46.644235    3188 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 00:46:46.644235    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:46.645012    3188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-611800 ).networkadapters[0]).ipaddresses[0]
	I0229 00:46:49.200753    3188 main.go:141] libmachine: [stdout =====>] : 172.19.6.238
	
	I0229 00:46:49.200850    3188 main.go:141] libmachine: [stderr =====>] : 
	I0229 00:46:49.201206    3188 sshutil.go:53] new ssh client: &{IP:172.19.6.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-611800\id_rsa Username:docker}
	I0229 00:46:50.433770    3188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.4486675s)
	I0229 00:46:50.433770    3188 addons.go:470] Verifying addon ingress=true in "addons-611800"
	I0229 00:46:50.434415    3188 out.go:177] * Verifying ingress addon...
	I0229 00:46:50.434520    3188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.147123s)
	I0229 00:46:50.434520    3188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.0849411s)
	I0229 00:46:50.434415    3188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.059335s)
	I0229 00:46:50.434089    3188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.9741152s)
	I0229 00:46:50.435417    3188 addons.go:470] Verifying addon registry=true in "addons-611800"
	I0229 00:46:50.434195    3188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.9742453s)
	I0229 00:46:50.434304    3188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.9392056s)
	I0229 00:46:50.434304    3188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.4261958s)
	I0229 00:46:50.433959    3188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.3101173s)
	I0229 00:46:50.434840    3188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.9093757s)
	I0229 00:46:50.436042    3188 out.go:177] * Verifying registry addon...
	I0229 00:46:50.436042    3188 addons.go:470] Verifying addon metrics-server=true in "addons-611800"
	I0229 00:46:50.437407    3188 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0229 00:46:50.438179    3188 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0229 00:46:50.543461    3188 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0229 00:46:50.543461    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:46:50.543461    3188 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0229 00:46:50.543461    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0229 00:46:50.566412    3188 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0229 00:46:50.955650    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:46:50.956125    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:46:51.461250    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:46:51.462202    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:46:51.950665    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:46:51.966208    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:46:52.414293    3188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.798017s)
	I0229 00:46:52.414364    3188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.9814394s)
	I0229 00:46:52.414364    3188 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (7.9409269s)
	I0229 00:46:52.414464    3188 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-611800"
	I0229 00:46:52.415461    3188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0229 00:46:52.415910    3188 out.go:177] * Verifying csi-hostpath-driver addon...
	I0229 00:46:52.416506    3188 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0229 00:46:52.417005    3188 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0229 00:46:52.417078    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0229 00:46:52.417783    3188 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0229 00:46:52.451678    3188 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0229 00:46:52.451678    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:46:52.477510    3188 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0229 00:46:52.477578    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0229 00:46:52.508849    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:46:52.512706    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:46:52.556809    3188 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0229 00:46:52.556809    3188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5447 bytes)
	I0229 00:46:52.621924    3188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0229 00:46:52.951127    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:46:52.970852    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:46:52.975361    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:46:53.449511    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:46:53.465223    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:46:53.468134    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:46:53.954856    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:46:53.967312    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:46:53.974759    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:46:54.167537    3188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.5455256s)
	I0229 00:46:54.175821    3188 addons.go:470] Verifying addon gcp-auth=true in "addons-611800"
	I0229 00:46:54.176823    3188 out.go:177] * Verifying gcp-auth addon...
	I0229 00:46:54.178820    3188 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0229 00:46:54.201819    3188 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0229 00:46:54.201819    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:46:54.436603    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:46:54.449337    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:46:54.452382    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:46:54.684918    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:46:54.927275    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:46:54.957971    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:46:54.960008    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:46:55.192523    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:46:55.432352    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:46:55.444440    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:46:55.444440    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:46:55.695147    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:46:55.937972    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:46:55.943130    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:46:55.950785    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:46:56.186717    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:46:56.441650    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:46:56.446164    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:46:56.447132    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:46:56.691895    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:46:56.944244    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:46:56.953544    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:46:56.965044    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:46:57.201250    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:46:57.442754    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:46:57.448949    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:46:57.449401    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:46:57.690813    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:46:57.931986    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:46:57.963302    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:46:57.963438    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:46:58.198781    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:46:58.436542    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:46:58.450972    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:46:58.453128    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:46:58.686336    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:46:58.926835    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:46:58.955763    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:46:58.957402    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:46:59.195557    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:46:59.436134    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:46:59.450749    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:46:59.450749    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:46:59.685765    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:46:59.926977    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:46:59.958066    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:46:59.958066    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:00.194099    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:00.434688    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:00.448265    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:00.448627    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:00.685539    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:00.940139    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:00.949335    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:00.950856    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:01.188729    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:01.430332    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:01.460626    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:01.460626    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:01.694958    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:01.933131    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:01.949332    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:01.951881    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:02.201748    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:02.438403    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:02.450551    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:02.458331    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:02.687830    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:02.927128    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:02.958078    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:02.958078    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:03.192696    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:03.432030    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:03.445643    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:03.446646    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:03.696700    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:03.937112    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:03.956882    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:03.958622    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:04.189794    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:04.442010    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:04.517589    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:04.524787    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:05.378322    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:05.384445    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:05.384768    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:05.393767    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:05.406142    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:05.440612    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:05.447997    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:05.451460    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:05.689968    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:05.929834    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:05.958894    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:05.958975    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:06.196352    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:06.439328    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:06.445197    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:06.446066    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:06.688855    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:06.943426    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:06.945969    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:06.948820    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:07.192556    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:07.733019    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:07.735396    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:07.736220    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:07.736220    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:07.927194    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:07.958002    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:07.958002    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:08.191647    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:08.433720    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:08.467377    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:08.467642    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:08.689095    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:08.928920    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:08.958995    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:08.959297    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:09.191148    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:09.429119    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:09.461398    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:09.461595    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:09.695712    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:09.938157    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:09.955481    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:09.959126    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:10.191422    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:10.428458    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:10.456731    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:10.468189    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:10.691827    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:10.938541    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:10.950775    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:10.951098    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:11.198102    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:11.438470    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:11.449941    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:11.451047    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:11.686791    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:11.943621    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:11.958138    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:11.970917    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:12.191611    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:12.428607    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:12.460726    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:12.460726    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:12.697000    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:12.936591    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:12.950886    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:12.952079    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:13.188209    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:13.429574    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:13.459774    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:13.460473    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:13.696756    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:13.934411    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:13.948243    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:13.948901    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:14.198559    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:14.438724    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:14.446350    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:14.451337    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:14.689395    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:14.943753    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:14.958007    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:14.960398    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:15.192336    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:15.431585    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:15.454826    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:15.457868    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:15.697837    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:15.936309    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:15.951850    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:15.952460    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:16.198866    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:16.440186    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:16.450357    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:16.452953    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:16.691600    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:16.930199    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:16.959177    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:16.962039    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:17.195200    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:17.437692    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:17.451407    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:17.452730    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:17.690190    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:17.930011    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:17.961552    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:17.961626    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:18.197307    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:18.435585    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:18.449010    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:18.451705    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:18.698970    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:18.939946    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:18.952685    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:18.952685    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:19.193058    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:19.431278    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:19.459438    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:19.460014    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:19.692932    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:19.937732    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:19.963376    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:19.964487    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:20.187813    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:20.428786    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:20.456692    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:20.459371    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:20.691644    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:20.931247    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:20.963508    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:20.966457    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:21.196173    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:21.438276    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:21.451824    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:21.453279    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:21.689149    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:21.931429    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:21.961757    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:21.963760    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:22.198104    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:22.439175    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:22.452632    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:22.457095    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:22.691061    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:22.930141    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:22.961720    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:22.963053    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:23.205938    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:23.440353    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:23.452009    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:23.452528    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:23.691880    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:23.947575    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:23.957769    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:23.969231    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:24.198450    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:24.439363    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:24.445241    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:24.449087    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:24.688468    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:24.930260    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:24.960009    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:24.961014    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:25.194843    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:25.434190    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:25.449990    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:25.450600    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:25.699345    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:25.938834    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:25.954206    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:25.957485    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:26.186712    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:26.428483    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:26.459285    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:26.459579    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:26.693607    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:26.933498    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:26.952514    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:26.954826    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:27.199685    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:27.435443    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:27.448245    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:27.450178    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:27.699195    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:27.971457    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:27.972952    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:27.974494    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:28.205120    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:28.437912    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:28.450220    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:28.450754    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:28.687959    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:28.946750    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:28.960148    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:28.962665    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:29.190154    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:29.439505    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:29.461410    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:29.461410    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:29.693396    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:29.934778    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:29.982435    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:29.983061    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:30.194983    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:30.435963    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:30.451639    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:30.452000    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:30.688804    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:30.932265    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:30.967216    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:30.970234    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:31.196297    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:31.438782    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:31.467114    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:31.469600    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:31.687661    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:31.929046    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:31.960576    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:31.960647    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:32.200186    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:32.435452    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:32.447913    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:32.447913    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:32.702367    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:32.945260    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:32.951096    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:32.956232    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:33.193781    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:33.436430    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:33.451983    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:33.452446    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:33.687176    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:33.930866    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:33.961696    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:33.962660    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:34.195012    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:34.433296    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:34.450706    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:34.455587    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:34.699291    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:34.943868    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:34.951504    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:34.951565    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:35.192249    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:35.433572    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:35.460139    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:35.462032    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:35.698873    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:35.933859    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:35.949261    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:35.952254    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:36.317546    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:36.436514    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:36.448143    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:36.449001    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:36.698466    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:36.936031    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:36.948309    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:36.948561    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:37.195905    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:37.432417    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:37.449878    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:37.454567    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:37.696074    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:37.960049    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:37.961432    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:37.974350    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:38.378560    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:38.440044    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:38.454420    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:38.456072    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:38.693849    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:38.933953    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:38.949509    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:38.949509    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:39.195140    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:39.432202    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:39.465951    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:39.467265    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:39.697269    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:39.937216    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:39.952841    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:39.953226    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:40.200033    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:40.436739    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:40.454309    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:40.454412    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:40.700776    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:40.945347    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:40.951114    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:40.953312    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:41.192389    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:41.435527    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:41.469165    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:41.469406    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:41.700765    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:41.940665    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:41.957204    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:41.957952    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:42.193260    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:42.432864    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:42.462954    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:42.463130    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:42.697615    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:42.938920    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:42.953729    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:42.956318    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:43.190303    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:43.431018    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:43.461610    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:43.462473    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:43.694266    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:43.938177    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:43.950048    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:43.954137    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:44.200230    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:44.440164    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:44.453561    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:44.453817    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:44.688874    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:44.944913    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:44.949422    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:44.953876    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:45.196120    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:45.442075    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:45.446644    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:45.448278    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:45.692802    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:45.934821    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:45.950143    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:45.955886    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:46.194058    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:46.431582    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:46.463276    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:46.464751    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:46.696746    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:46.937382    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:46.949543    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:46.951085    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:47.199874    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:47.441200    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:47.447766    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:47.453532    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:47.693238    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:47.932984    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:47.961606    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:47.963327    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:48.198174    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:48.438661    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:48.450966    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:48.454025    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:48.703283    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:48.945844    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:48.956148    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:48.956843    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:49.193608    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:49.439344    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:49.450542    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:49.453029    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:50.022991    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:50.023162    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:50.024887    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:50.029369    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:50.225509    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:50.432769    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:50.476261    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:50.489575    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:50.725548    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:50.934434    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:50.964261    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:50.964261    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:51.196015    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:51.436387    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:51.447046    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:51.447046    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:51.698251    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:52.035443    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:52.036932    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:52.042264    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:52.198442    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:52.435740    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:52.447044    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:52.453249    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:52.695136    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:52.936982    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:52.959663    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:52.960292    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:53.189587    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:53.446377    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:53.453495    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:53.453645    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:53.694320    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:53.936590    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:53.953872    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:53.954570    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:54.188599    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:54.448490    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:54.456049    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:54.460284    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:54.692561    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:54.944589    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:54.954578    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:54.955579    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:55.200027    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:55.501835    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:55.502666    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:55.508571    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:55.704363    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:55.930115    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:55.959966    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:55.970401    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:56.196906    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:56.439296    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:56.454098    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:56.455454    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:56.690367    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:56.934199    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:56.964302    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:56.966032    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:57.199082    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:57.443745    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:57.454018    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:57.454338    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:57.700361    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:57.943803    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:57.951512    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:57.956691    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:58.194934    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:58.433212    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:58.447229    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:58.450726    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:58.701582    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:58.942342    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:58.959153    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:58.959686    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:59.192657    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:59.432234    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:59.464600    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:59.464846    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:47:59.697914    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:47:59.938378    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:47:59.953737    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:47:59.956359    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:48:00.203297    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:00.442352    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:00.448889    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:48:00.449492    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:00.693373    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:00.933515    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:00.963655    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:00.963655    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:48:01.197037    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:01.438768    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:01.452200    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:48:01.456007    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:01.701776    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:01.952251    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:01.958447    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:48:01.958447    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:02.194058    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:02.433785    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:02.450130    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:48:02.450376    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:02.702699    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:02.945262    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:02.953717    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:48:02.953717    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:03.194876    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:03.436165    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:03.452254    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:48:03.452398    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:03.702002    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:03.940564    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:03.956186    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:03.958655    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:48:04.191626    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:04.434368    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:04.465291    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:04.467039    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:48:04.819059    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:04.940352    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:04.951378    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:04.953985    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:48:05.201889    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:05.585995    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:48:05.586065    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:05.586880    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:05.981072    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:48:05.983496    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:05.985413    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:05.987893    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:06.208170    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:06.453414    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:06.465822    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 00:48:06.465883    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:06.695638    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:06.944383    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:06.953713    3188 kapi.go:107] duration metric: took 1m16.5112771s to wait for kubernetes.io/minikube-addons=registry ...
	I0229 00:48:06.954326    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:07.189241    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:07.431198    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:07.459187    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:07.693907    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:07.934133    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:07.947482    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:08.201652    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:08.444614    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:08.449291    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:08.695627    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:08.966256    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:08.967255    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:09.187550    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:09.429683    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:09.458362    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:09.697667    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:09.939063    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:09.953941    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:10.201492    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:10.444528    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:10.452876    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:10.694884    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:11.734637    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:11.735282    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:11.735920    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:11.775332    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:11.776058    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:11.778992    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:11.953261    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:11.966469    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:12.200967    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:12.442350    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:12.450983    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:12.691245    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:12.946354    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:12.950043    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:13.193726    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:13.437239    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:13.448279    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:13.702036    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:13.941921    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:13.955136    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:14.191024    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:14.431409    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:14.460720    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:14.698570    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:14.940464    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:14.953383    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:15.192745    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:15.433018    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:15.463613    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:15.700422    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:15.940700    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:15.952797    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:16.280934    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:16.441058    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:16.452725    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:16.689086    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:16.934168    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:16.961146    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:17.199396    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:17.436299    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:17.450812    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:17.700484    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:17.944600    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:17.964353    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:18.188953    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:18.447978    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:18.452594    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:18.692773    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:18.938314    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:18.972489    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:19.199869    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:19.440172    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:19.454719    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:19.690119    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:19.954514    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:19.962719    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:20.196139    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:20.441182    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:20.453276    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:20.773699    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:21.079796    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:21.084040    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:21.191884    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:21.430125    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:21.461091    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:21.699324    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:21.933602    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:21.963691    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:22.204149    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:22.438489    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:22.454857    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:22.703108    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:22.931023    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:22.963146    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:23.198312    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:23.440152    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:23.453649    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:23.690654    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:23.936216    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:23.959179    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:24.200574    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:24.437434    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:24.450433    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:24.702438    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:24.938790    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:24.954665    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:25.200676    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:25.436417    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:25.448588    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:25.697152    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:25.935574    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:25.963837    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:26.195530    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:26.433291    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:26.462595    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:26.699355    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:26.945598    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:26.951105    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:27.202732    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:27.439453    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:27.457495    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:27.701503    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:27.937139    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:27.948269    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:28.201820    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:28.442816    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:28.449117    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:28.695655    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:28.932344    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:28.962556    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:29.198894    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:29.441284    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:29.453435    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:29.704344    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:29.943911    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:29.949205    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:30.194689    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:30.435115    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:30.462671    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:30.695049    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:30.937275    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:30.955254    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:31.203647    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:31.445685    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:31.450342    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:31.696885    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:31.936352    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:31.962925    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:32.198185    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:32.440015    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:32.452323    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:32.695162    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:32.947361    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:32.953000    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:33.200252    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:33.440476    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:33.457885    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:33.701234    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:33.941452    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:33.957733    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:34.195945    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:34.437809    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:34.463001    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:34.703139    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:34.944027    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:34.952701    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:35.205332    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:35.444747    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:35.449754    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:35.695319    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:35.934018    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:35.961657    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:36.194339    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:36.432281    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:36.463120    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:36.700025    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:36.954727    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:36.959600    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:37.198751    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:37.437851    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:37.451884    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:37.690439    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:37.931930    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:37.961810    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:38.228853    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:38.441094    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:38.455200    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:38.703925    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:38.954191    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:38.964107    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:39.204904    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:39.441855    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:39.454091    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:39.697579    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:39.934502    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:39.962177    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:40.198865    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:40.438591    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:40.452156    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:40.702231    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:40.944921    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:40.951432    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:41.196486    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:41.436945    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:41.451216    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:41.706713    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:41.979853    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:41.990124    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:42.194479    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:42.447749    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:42.451276    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:42.694819    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:42.940328    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:42.962750    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:43.198538    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:43.438665    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:43.451807    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:43.691434    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:44.058520    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:44.061599    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:44.198796    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:44.442342    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:44.456790    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:44.693509    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:44.933663    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:44.976290    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:45.200098    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:45.438443    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:45.454171    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:45.701834    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:45.958893    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:45.960891    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:46.190566    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:46.445633    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:46.450750    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:46.732812    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:46.940157    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:46.968006    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:47.197090    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:47.432813    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:47.463594    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:47.698361    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:47.946211    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:47.952009    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:48.196068    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:48.442348    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:48.455408    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:48.706597    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:48.953501    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:48.958639    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:49.193800    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:49.435721    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:49.465876    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:49.699559    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:49.941567    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:49.953816    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:50.204938    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:50.441640    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:50.456646    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:50.693246    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:50.952567    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:50.958246    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:51.191403    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:51.448775    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:51.453918    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:51.696655    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:51.940579    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:51.963918    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:52.196074    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:52.452992    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:52.458207    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:52.693864    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:52.947293    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:52.951104    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:53.197330    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:53.434250    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:53.463710    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:53.702591    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:53.940807    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:53.960254    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:54.197020    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:54.440264    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:54.452002    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:54.702308    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:54.945502    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:54.950430    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:55.196906    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:55.437616    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:55.465378    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:55.702730    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:55.945976    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 00:48:55.952558    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:56.203163    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:56.443000    3188 kapi.go:107] duration metric: took 2m4.0182133s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0229 00:48:56.455054    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:56.703409    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:56.955145    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:57.207648    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:57.460737    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:57.939759    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:57.954878    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:58.205664    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:58.460882    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:58.697101    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:58.962551    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:59.197643    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:59.465864    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:48:59.699805    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:48:59.967992    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:00.197276    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:00.462521    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:00.696278    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:00.956682    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:01.206752    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:01.455648    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:01.704916    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:01.957870    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:02.207061    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:02.458843    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:02.705925    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:02.955909    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:03.205125    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:03.456425    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:03.705380    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:03.956829    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:04.205936    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:04.457289    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:04.695050    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:04.964430    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:05.199223    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:05.465192    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:05.697061    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:05.963332    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:06.197919    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:06.464859    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:06.699280    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:06.969802    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:07.201428    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:07.454442    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:07.706697    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:07.960945    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:08.193290    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:08.476005    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:08.701852    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:08.957955    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:09.206910    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:09.470139    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:09.700295    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:09.953057    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:10.206068    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:10.459322    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:10.696052    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:10.966444    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:11.202762    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:11.457061    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:11.694075    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:11.965470    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:12.203880    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:12.456393    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:12.692241    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:12.963262    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:13.199706    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:13.453377    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:13.703791    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:13.957501    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:14.193411    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:14.463084    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:14.694208    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:14.951243    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:15.204017    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:15.462503    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:15.700583    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:15.968518    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:16.198291    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:16.468172    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:16.703631    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:16.956994    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:17.207797    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:17.464945    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:17.701535    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:17.956333    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:18.193750    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:18.463783    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:18.706324    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:18.967126    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:19.203237    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:19.457773    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:19.693571    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:19.962895    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:20.199719    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:20.453294    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:20.705843    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:20.959360    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:21.194907    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:21.469170    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:21.701382    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:22.026003    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:22.201720    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:22.466369    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:22.697180    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:23.033324    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:23.196514    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:23.474485    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:23.704539    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:23.957695    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:24.193290    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:24.465688    3188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 00:49:24.700049    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:24.967112    3188 kapi.go:107] duration metric: took 2m34.5210512s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0229 00:49:25.204142    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:25.695753    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:26.203629    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:26.693999    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:27.193539    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:27.695689    3188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 00:49:28.197481    3188 kapi.go:107] duration metric: took 2m34.010036s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0229 00:49:28.198431    3188 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-611800 cluster.
	I0229 00:49:28.199140    3188 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0229 00:49:28.199810    3188 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0229 00:49:28.200521    3188 out.go:177] * Enabled addons: nvidia-device-plugin, yakd, storage-provisioner, helm-tiller, inspektor-gadget, cloud-spanner, ingress-dns, metrics-server, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0229 00:49:28.201262    3188 addons.go:505] enable addons completed in 3m8.6684361s: enabled=[nvidia-device-plugin yakd storage-provisioner helm-tiller inspektor-gadget cloud-spanner ingress-dns metrics-server default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0229 00:49:28.201338    3188 start.go:233] waiting for cluster config update ...
	I0229 00:49:28.201338    3188 start.go:242] writing updated cluster config ...
	I0229 00:49:28.213087    3188 ssh_runner.go:195] Run: rm -f paused
	I0229 00:49:28.414571    3188 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 00:49:28.415074    3188 out.go:177] * Done! kubectl is now configured to use "addons-611800" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 29 00:50:19 addons-611800 dockerd[1300]: time="2024-02-29T00:50:19.813016714Z" level=info msg="shim disconnected" id=0e24e45dde21a27b0bf253b61c83cc2efb5c9abcf6be8bf69e474a753d189aa3 namespace=moby
	Feb 29 00:50:19 addons-611800 dockerd[1300]: time="2024-02-29T00:50:19.813248823Z" level=warning msg="cleaning up after shim disconnected" id=0e24e45dde21a27b0bf253b61c83cc2efb5c9abcf6be8bf69e474a753d189aa3 namespace=moby
	Feb 29 00:50:19 addons-611800 dockerd[1300]: time="2024-02-29T00:50:19.813358428Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 00:50:19 addons-611800 dockerd[1300]: time="2024-02-29T00:50:19.915215132Z" level=info msg="shim disconnected" id=d99e50e45e81e01b8023a95d233cd2e75d84f9f84b64355dd882a7d3a0dd274a namespace=moby
	Feb 29 00:50:19 addons-611800 dockerd[1294]: time="2024-02-29T00:50:19.915927361Z" level=info msg="ignoring event" container=d99e50e45e81e01b8023a95d233cd2e75d84f9f84b64355dd882a7d3a0dd274a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 00:50:19 addons-611800 dockerd[1300]: time="2024-02-29T00:50:19.918502464Z" level=warning msg="cleaning up after shim disconnected" id=d99e50e45e81e01b8023a95d233cd2e75d84f9f84b64355dd882a7d3a0dd274a namespace=moby
	Feb 29 00:50:19 addons-611800 dockerd[1300]: time="2024-02-29T00:50:19.918580667Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 00:50:20 addons-611800 dockerd[1294]: time="2024-02-29T00:50:20.257260509Z" level=info msg="ignoring event" container=643d9df51998c2234c78488fa468928d5803390dca2dd533be1b13f059eaf52e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 00:50:20 addons-611800 dockerd[1300]: time="2024-02-29T00:50:20.259573702Z" level=info msg="shim disconnected" id=643d9df51998c2234c78488fa468928d5803390dca2dd533be1b13f059eaf52e namespace=moby
	Feb 29 00:50:20 addons-611800 dockerd[1300]: time="2024-02-29T00:50:20.259782210Z" level=warning msg="cleaning up after shim disconnected" id=643d9df51998c2234c78488fa468928d5803390dca2dd533be1b13f059eaf52e namespace=moby
	Feb 29 00:50:20 addons-611800 dockerd[1300]: time="2024-02-29T00:50:20.259799811Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 00:50:20 addons-611800 dockerd[1300]: time="2024-02-29T00:50:20.418974323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 00:50:20 addons-611800 dockerd[1300]: time="2024-02-29T00:50:20.419064726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 00:50:20 addons-611800 dockerd[1300]: time="2024-02-29T00:50:20.419083927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 00:50:20 addons-611800 dockerd[1300]: time="2024-02-29T00:50:20.419538445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 00:50:20 addons-611800 cri-dockerd[1186]: time="2024-02-29T00:50:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/08c4732df71799c4cb8acaf8213b484bafe1f3e40c44e31a30d4974fe6d5513b/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 29 00:50:25 addons-611800 cri-dockerd[1186]: time="2024-02-29T00:50:25Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Feb 29 00:50:25 addons-611800 dockerd[1300]: time="2024-02-29T00:50:25.386767771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 00:50:25 addons-611800 dockerd[1300]: time="2024-02-29T00:50:25.387423998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 00:50:25 addons-611800 dockerd[1300]: time="2024-02-29T00:50:25.387456399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 00:50:25 addons-611800 dockerd[1300]: time="2024-02-29T00:50:25.387904718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 00:50:34 addons-611800 dockerd[1294]: time="2024-02-29T00:50:34.945031704Z" level=info msg="ignoring event" container=992b93ac565d5a756eb08b6ee91120f9adaf89e1aca3bfec89aceec189107bae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 00:50:34 addons-611800 dockerd[1300]: time="2024-02-29T00:50:34.948775057Z" level=info msg="shim disconnected" id=992b93ac565d5a756eb08b6ee91120f9adaf89e1aca3bfec89aceec189107bae namespace=moby
	Feb 29 00:50:34 addons-611800 dockerd[1300]: time="2024-02-29T00:50:34.948842460Z" level=warning msg="cleaning up after shim disconnected" id=992b93ac565d5a756eb08b6ee91120f9adaf89e1aca3bfec89aceec189107bae namespace=moby
	Feb 29 00:50:34 addons-611800 dockerd[1300]: time="2024-02-29T00:50:34.948871961Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	a603231869a79       nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                                                                10 seconds ago       Running             nginx                                    0                   08c4732df7179       nginx
	76e508e83c12f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:efddd4f0a8b51a7c406c67894203bc475198f54809105ce0c2df904a44180e75                            23 seconds ago       Exited              gadget                                   4                   992b93ac565d5       gadget-7dbzk
	de81ce929f27e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32                                 About a minute ago   Running             gcp-auth                                 0                   bc265b3ca6fb3       gcp-auth-5f6b4f85fd-bk6j9
	8bff5b4f5c014       registry.k8s.io/ingress-nginx/controller@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c                             About a minute ago   Running             controller                               0                   65009aa5ee30d       ingress-nginx-controller-7967645744-thqdf
	f5d2fcfae1b60       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   686766c2e739e       csi-hostpathplugin-cgpk8
	ba8c004b66cc4       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   686766c2e739e       csi-hostpathplugin-cgpk8
	73fb903ce76ed       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   686766c2e739e       csi-hostpathplugin-cgpk8
	1b6a68462f85c       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   686766c2e739e       csi-hostpathplugin-cgpk8
	55a7f1d4f2b6c       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   686766c2e739e       csi-hostpathplugin-cgpk8
	063f57d98d93a       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   06a6f80bea90e       csi-hostpath-resizer-0
	9aa673df955f5       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             2 minutes ago        Running             csi-attacher                             0                   dc7031c3025c2       csi-hostpath-attacher-0
	26826694ce65c       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   2 minutes ago        Running             csi-external-health-monitor-controller   0                   686766c2e739e       csi-hostpathplugin-cgpk8
	afe9e5615e079       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:25d6a5f11211cc5c3f9f2bf552b585374af287b4debf693cacbe2da47daa5084                   2 minutes ago        Exited              patch                                    0                   add2e63f2612e       ingress-nginx-admission-patch-z24xm
	5e9260a8ddf37       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:25d6a5f11211cc5c3f9f2bf552b585374af287b4debf693cacbe2da47daa5084                   2 minutes ago        Exited              create                                   0                   18a54daec06d1       ingress-nginx-admission-create-vwnfs
	d0695afada72e       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       2 minutes ago        Running             local-path-provisioner                   0                   b6853e8575079       local-path-provisioner-78b46b4d5c-h2rq5
	7528c2fecb7af       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   2e72e6647c73b       snapshot-controller-58dbcc7b99-gh6xl
	69b43646716da       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   5eea3cdfa523c       snapshot-controller-58dbcc7b99-nx4c2
	f7a730a68cf21       gcr.io/cloud-spanner-emulator/emulator@sha256:41d5dccfcf13817a2348beba0ca7c650ffdd795f7fcbe975b7822c9eed262e15                               2 minutes ago        Running             cloud-spanner-emulator                   0                   d2f979896b66c       cloud-spanner-emulator-6548d5df46-x7c4s
	c6924ad372c0b       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   5fd24c85dffc6       yakd-dashboard-9947fc6bf-g8dx5
	2e012a02288e8       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             3 minutes ago        Running             minikube-ingress-dns                     0                   0256fd94e2760       kube-ingress-dns-minikube
	9c8c5df228da2       nvcr.io/nvidia/k8s-device-plugin@sha256:2388c1f792daf3e810a6b43cdf709047183b50f5ec3ed476fae6aa0a07e68acc                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   062a6463c67e3       nvidia-device-plugin-daemonset-l9vxg
	7586fd44fd786       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   f6562937de180       storage-provisioner
	94b44a03f6b66       ead0a4a53df89                                                                                                                                4 minutes ago        Running             coredns                                  0                   4e6cff8b5d715       coredns-5dd5756b68-kt79c
	d909df06ae7ba       83f6cc407eed8                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   0bfba6f9215a7       kube-proxy-qf92m
	83907237e123c       d058aa5ab969c                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   7c220eb360866       kube-controller-manager-addons-611800
	993fcd4e7a763       e3db313c6dbc0                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   6f3cd77920e19       kube-scheduler-addons-611800
	525e3de0d0f3d       7fe0e6f37db33                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   d9e782044d284       kube-apiserver-addons-611800
	9551812a0f935       73deb9a3f7025                                                                                                                                4 minutes ago        Running             etcd                                     0                   8fe60828e0d07       etcd-addons-611800
	
	
	==> controller_ingress [8bff5b4f5c01] <==
	I0229 00:49:24.051461       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"91ff6e73-c108-4c09-8523-b07df1dfd5f9", APIVersion:"v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0229 00:49:24.069815       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"f852d7bb-e0ea-477f-ba67-cb64ab97109a", APIVersion:"v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0229 00:49:24.072907       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"0fc3a95e-1a08-4193-9286-324c0c508c15", APIVersion:"v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0229 00:49:25.227583       7 nginx.go:303] "Starting NGINX process"
	I0229 00:49:25.228251       7 leaderelection.go:245] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0229 00:49:25.228263       7 nginx.go:323] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0229 00:49:25.228678       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0229 00:49:25.266389       7 leaderelection.go:255] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0229 00:49:25.266554       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-7967645744-thqdf"
	I0229 00:49:25.280537       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-7967645744-thqdf" node="addons-611800"
	I0229 00:49:25.402468       7 controller.go:210] "Backend successfully reloaded"
	I0229 00:49:25.402629       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0229 00:49:25.403149       7 event.go:298] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7967645744-thqdf", UID:"d6c775fa-4122-4b86-8ad0-f8813f681531", APIVersion:"v1", ResourceVersion:"722", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0229 00:50:19.480093       7 controller.go:1108] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0229 00:50:19.613996       7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.134s renderingIngressLength:1 renderingIngressTime:0s admissionTime:18.0kBs testedConfigurationSize:0.134}
	I0229 00:50:19.614040       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0229 00:50:19.628745       7 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0229 00:50:19.629409       7 event.go:298] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"1e6e2c69-3d43-464f-8d29-11b207ffdbe6", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1580", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0229 00:50:20.800250       7 controller.go:1214] Service "default/nginx" does not have any active Endpoint.
	I0229 00:50:20.800401       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0229 00:50:20.911304       7 controller.go:210] "Backend successfully reloaded"
	I0229 00:50:20.912230       7 event.go:298] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7967645744-thqdf", UID:"d6c775fa-4122-4b86-8ad0-f8813f681531", APIVersion:"v1", ResourceVersion:"722", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0229 00:50:24.134578       7 controller.go:1214] Service "default/nginx" does not have any active Endpoint.
	I0229 00:50:25.285478       7 status.go:304] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"172.19.6.238"}]
	I0229 00:50:25.297820       7 event.go:298] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"1e6e2c69-3d43-464f-8d29-11b207ffdbe6", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1619", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	
	
	==> coredns [94b44a03f6b6] <==
	[INFO] plugin/reload: Running configuration SHA512 = 6c43704ee218c3500d97c54254d76c1d56cc0443961fea557ef898f1da8154a1212605c10203ede1e288070d97e67d107ee3d60ae9c1e40b060414629f7811dd
	[INFO] Reloading complete
	[INFO] 127.0.0.1:55261 - 24912 "HINFO IN 5021765644634492659.3115291979992639838. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.123378148s
	[INFO] 10.244.0.7:46504 - 30727 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000405813s
	[INFO] 10.244.0.7:46504 - 61192 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000114904s
	[INFO] 10.244.0.7:49933 - 24574 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000255708s
	[INFO] 10.244.0.7:49933 - 25315 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000231608s
	[INFO] 10.244.0.7:59238 - 52034 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103004s
	[INFO] 10.244.0.7:59238 - 43073 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000421013s
	[INFO] 10.244.0.7:47430 - 5519 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000171805s
	[INFO] 10.244.0.7:47430 - 19594 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000075202s
	[INFO] 10.244.0.7:54761 - 25260 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000103803s
	[INFO] 10.244.0.7:47367 - 49183 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000049102s
	[INFO] 10.244.0.7:34604 - 23698 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000164306s
	[INFO] 10.244.0.7:58990 - 53551 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092203s
	[INFO] 10.244.0.21:36834 - 39779 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000413015s
	[INFO] 10.244.0.21:41934 - 3529 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000294211s
	[INFO] 10.244.0.21:48777 - 39518 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100303s
	[INFO] 10.244.0.21:34957 - 49308 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000095304s
	[INFO] 10.244.0.21:57147 - 41707 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000086803s
	[INFO] 10.244.0.21:50212 - 53738 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000211708s
	[INFO] 10.244.0.21:37280 - 30275 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.00211768s
	[INFO] 10.244.0.21:33878 - 43791 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.002307587s
	[INFO] 10.244.0.25:60274 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000415717s
	[INFO] 10.244.0.25:38865 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000143306s
	
	
	==> describe nodes <==
	Name:               addons-611800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-611800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=addons-611800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T00_46_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-611800
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-611800"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 00:46:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-611800
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 00:50:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 00:50:14 +0000   Thu, 29 Feb 2024 00:46:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 00:50:14 +0000   Thu, 29 Feb 2024 00:46:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 00:50:14 +0000   Thu, 29 Feb 2024 00:46:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 00:50:14 +0000   Thu, 29 Feb 2024 00:46:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.6.238
	  Hostname:    addons-611800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f73a44f7f824b9a890ec20a939a05ef
	  System UUID:                230a9f3e-2ab5-004d-8bc9-0ff01c983ee7
	  Boot ID:                    cafa6c48-5f2f-42f6-920f-8690bf5ef390
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-x7c4s      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  gcp-auth                    gcp-auth-5f6b4f85fd-bk6j9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  ingress-nginx               ingress-nginx-controller-7967645744-thqdf    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m45s
	  kube-system                 coredns-5dd5756b68-kt79c                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m16s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 csi-hostpathplugin-cgpk8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 etcd-addons-611800                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m28s
	  kube-system                 kube-apiserver-addons-611800                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-controller-manager-addons-611800        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-proxy-qf92m                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-scheduler-addons-611800                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 nvidia-device-plugin-daemonset-l9vxg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 snapshot-controller-58dbcc7b99-gh6xl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 snapshot-controller-58dbcc7b99-nx4c2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  local-path-storage          local-path-provisioner-78b46b4d5c-h2rq5      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-g8dx5               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m36s (x8 over 4m36s)  kubelet          Node addons-611800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s (x8 over 4m36s)  kubelet          Node addons-611800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s (x7 over 4m36s)  kubelet          Node addons-611800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m28s                  kubelet          Node addons-611800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m28s                  kubelet          Node addons-611800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m28s                  kubelet          Node addons-611800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m28s                  kubelet          Node addons-611800 status is now: NodeReady
	  Normal  RegisteredNode           4m17s                  node-controller  Node addons-611800 event: Registered Node addons-611800 in Controller
	
	
	==> dmesg <==
	[ +13.951623] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.178278] kauditd_printk_skb: 5 callbacks suppressed
	[ +15.508990] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.068611] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.031679] kauditd_printk_skb: 76 callbacks suppressed
	[Feb29 00:47] hrtimer: interrupt took 1266441 ns
	[  +7.223927] kauditd_printk_skb: 115 callbacks suppressed
	[ +39.122991] kauditd_printk_skb: 4 callbacks suppressed
	[Feb29 00:48] kauditd_printk_skb: 28 callbacks suppressed
	[ +12.572433] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.730124] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.041414] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.057739] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.890888] kauditd_printk_skb: 10 callbacks suppressed
	[  +9.203118] kauditd_printk_skb: 2 callbacks suppressed
	[Feb29 00:49] kauditd_printk_skb: 29 callbacks suppressed
	[ +16.052873] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.119044] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.141754] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.006901] kauditd_printk_skb: 34 callbacks suppressed
	[ +11.798406] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.264044] kauditd_printk_skb: 33 callbacks suppressed
	[Feb29 00:50] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.988442] kauditd_printk_skb: 67 callbacks suppressed
	[  +6.698008] kauditd_printk_skb: 20 callbacks suppressed
	
	
	==> etcd [9551812a0f93] <==
	{"level":"warn","ts":"2024-02-29T00:48:11.899164Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-29T00:48:11.13055Z","time spent":"768.608879ms","remote":"127.0.0.1:48978","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":13505,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2024-02-29T00:48:11.899925Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"532.418287ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10574"}
	{"level":"info","ts":"2024-02-29T00:48:11.899958Z","caller":"traceutil/trace.go:171","msg":"trace[338506644] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1010; }","duration":"532.456988ms","start":"2024-02-29T00:48:11.367494Z","end":"2024-02-29T00:48:11.899951Z","steps":["trace[338506644] 'range keys from in-memory index tree'  (duration: 532.320584ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T00:48:11.899979Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-29T00:48:11.367476Z","time spent":"532.49829ms","remote":"127.0.0.1:48978","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":10597,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"info","ts":"2024-02-29T00:48:11.939622Z","caller":"traceutil/trace.go:171","msg":"trace[2093686572] transaction","detail":"{read_only:false; response_revision:1011; number_of_response:1; }","duration":"317.850582ms","start":"2024-02-29T00:48:11.621756Z","end":"2024-02-29T00:48:11.939606Z","steps":["trace[2093686572] 'process raft request'  (duration: 317.397768ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T00:48:11.941755Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-29T00:48:11.621737Z","time spent":"319.186824ms","remote":"127.0.0.1:48976","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5109,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/addons-611800\" mod_revision:950 > success:<request_put:<key:\"/registry/minions/addons-611800\" value_size:5070 >> failure:<request_range:<key:\"/registry/minions/addons-611800\" > >"}
	{"level":"info","ts":"2024-02-29T00:48:16.44517Z","caller":"traceutil/trace.go:171","msg":"trace[1255273594] transaction","detail":"{read_only:false; response_revision:1025; number_of_response:1; }","duration":"108.083826ms","start":"2024-02-29T00:48:16.33707Z","end":"2024-02-29T00:48:16.445154Z","steps":["trace[1255273594] 'process raft request'  (duration: 107.281001ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T00:48:20.94158Z","caller":"traceutil/trace.go:171","msg":"trace[444979974] transaction","detail":"{read_only:false; response_revision:1045; number_of_response:1; }","duration":"113.836206ms","start":"2024-02-29T00:48:20.827727Z","end":"2024-02-29T00:48:20.941563Z","steps":["trace[444979974] 'process raft request'  (duration: 69.791611ms)","trace[444979974] 'compare'  (duration: 43.55728ms)"],"step_count":2}
	{"level":"warn","ts":"2024-02-29T00:48:21.242784Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.483448ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13482"}
	{"level":"info","ts":"2024-02-29T00:48:21.242913Z","caller":"traceutil/trace.go:171","msg":"trace[1342200995] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1045; }","duration":"121.768557ms","start":"2024-02-29T00:48:21.12111Z","end":"2024-02-29T00:48:21.242878Z","steps":["trace[1342200995] 'range keys from in-memory index tree'  (duration: 121.310543ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T00:48:21.243404Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.610659ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82033"}
	{"level":"info","ts":"2024-02-29T00:48:21.243703Z","caller":"traceutil/trace.go:171","msg":"trace[195574159] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1045; }","duration":"138.311682ms","start":"2024-02-29T00:48:21.105245Z","end":"2024-02-29T00:48:21.243556Z","steps":["trace[195574159] 'range keys from in-memory index tree'  (duration: 137.282549ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T00:48:21.244704Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.096979ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T00:48:21.24535Z","caller":"traceutil/trace.go:171","msg":"trace[242615543] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1045; }","duration":"135.624196ms","start":"2024-02-29T00:48:21.109593Z","end":"2024-02-29T00:48:21.245218Z","steps":["trace[242615543] 'range keys from in-memory index tree'  (duration: 134.999476ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T00:48:31.843842Z","caller":"traceutil/trace.go:171","msg":"trace[1475972272] transaction","detail":"{read_only:false; response_revision:1086; number_of_response:1; }","duration":"194.850866ms","start":"2024-02-29T00:48:31.648976Z","end":"2024-02-29T00:48:31.843826Z","steps":["trace[1475972272] 'process raft request'  (duration: 194.725462ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T00:48:44.204015Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.803576ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82464"}
	{"level":"info","ts":"2024-02-29T00:48:44.204088Z","caller":"traceutil/trace.go:171","msg":"trace[1038130216] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1168; }","duration":"104.889679ms","start":"2024-02-29T00:48:44.099187Z","end":"2024-02-29T00:48:44.204077Z","steps":["trace[1038130216] 'range keys from in-memory index tree'  (duration: 104.567568ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T00:48:58.099478Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.409007ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10941"}
	{"level":"info","ts":"2024-02-29T00:48:58.100088Z","caller":"traceutil/trace.go:171","msg":"trace[2103604218] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1212; }","duration":"234.001529ms","start":"2024-02-29T00:48:57.86604Z","end":"2024-02-29T00:48:58.100042Z","steps":["trace[2103604218] 'range keys from in-memory index tree'  (duration: 233.283502ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T00:49:23.196068Z","caller":"traceutil/trace.go:171","msg":"trace[1579861826] transaction","detail":"{read_only:false; response_revision:1261; number_of_response:1; }","duration":"173.753693ms","start":"2024-02-29T00:49:23.022298Z","end":"2024-02-29T00:49:23.196051Z","steps":["trace[1579861826] 'process raft request'  (duration: 173.478882ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T00:49:51.643969Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.105719ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-02-29T00:49:51.644036Z","caller":"traceutil/trace.go:171","msg":"trace[1126076808] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1407; }","duration":"108.190723ms","start":"2024-02-29T00:49:51.535833Z","end":"2024-02-29T00:49:51.644024Z","steps":["trace[1126076808] 'range keys from in-memory index tree'  (duration: 107.900911ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T00:49:51.644278Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.024814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8705"}
	{"level":"info","ts":"2024-02-29T00:49:51.644302Z","caller":"traceutil/trace.go:171","msg":"trace[2123503152] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1407; }","duration":"131.052715ms","start":"2024-02-29T00:49:51.513241Z","end":"2024-02-29T00:49:51.644294Z","steps":["trace[2123503152] 'range keys from in-memory index tree'  (duration: 130.892209ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T00:49:51.793942Z","caller":"traceutil/trace.go:171","msg":"trace[732918943] transaction","detail":"{read_only:false; response_revision:1408; number_of_response:1; }","duration":"144.881155ms","start":"2024-02-29T00:49:51.649042Z","end":"2024-02-29T00:49:51.793924Z","steps":["trace[732918943] 'process raft request'  (duration: 144.150727ms)"],"step_count":1}
	
	
	==> gcp-auth [de81ce929f27] <==
	2024/02/29 00:49:27 GCP Auth Webhook started!
	2024/02/29 00:49:29 Ready to marshal response ...
	2024/02/29 00:49:29 Ready to write response ...
	2024/02/29 00:49:29 Ready to marshal response ...
	2024/02/29 00:49:29 Ready to write response ...
	2024/02/29 00:49:39 Ready to marshal response ...
	2024/02/29 00:49:39 Ready to write response ...
	2024/02/29 00:49:39 Ready to marshal response ...
	2024/02/29 00:49:39 Ready to write response ...
	2024/02/29 00:49:40 Ready to marshal response ...
	2024/02/29 00:49:40 Ready to write response ...
	2024/02/29 00:49:50 Ready to marshal response ...
	2024/02/29 00:49:50 Ready to write response ...
	2024/02/29 00:50:11 Ready to marshal response ...
	2024/02/29 00:50:11 Ready to write response ...
	2024/02/29 00:50:19 Ready to marshal response ...
	2024/02/29 00:50:19 Ready to write response ...
	
	
	==> kernel <==
	 00:50:35 up 6 min,  0 users,  load average: 2.53, 2.09, 0.98
	Linux addons-611800 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [525e3de0d0f3] <==
	I0229 00:48:03.478271       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 00:48:11.902615       1 trace.go:236] Trace[93188381]: "List" accept:application/json, */*,audit-id:61d98246-3c6c-4b56-9c13-8aa65ace0031,client:172.19.0.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/gcp-auth/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (29-Feb-2024 00:48:11.366) (total time: 535ms):
	Trace[93188381]: ["List(recursive=true) etcd3" audit-id:61d98246-3c6c-4b56-9c13-8aa65ace0031,key:/pods/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: 535ms (00:48:11.366)]
	Trace[93188381]: [535.860497ms] [535.860497ms] END
	I0229 00:48:11.904608       1 trace.go:236] Trace[29789119]: "List" accept:application/json, */*,audit-id:cf383837-fefd-4ded-a03f-fcd6062e6f94,client:172.19.0.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (29-Feb-2024 00:48:11.098) (total time: 804ms):
	Trace[29789119]: ["List(recursive=true) etcd3" audit-id:cf383837-fefd-4ded-a03f-fcd6062e6f94,key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: 805ms (00:48:11.098)]
	Trace[29789119]: [804.320812ms] [804.320812ms] END
	I0229 00:48:11.904859       1 trace.go:236] Trace[254742903]: "List" accept:application/json, */*,audit-id:2c5a1867-5300-4f52-87c3-19cff7e4bc9d,client:172.19.0.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/ingress-nginx/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (29-Feb-2024 00:48:11.129) (total time: 774ms):
	Trace[254742903]: ["List(recursive=true) etcd3" audit-id:2c5a1867-5300-4f52-87c3-19cff7e4bc9d,key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: 774ms (00:48:11.130)]
	Trace[254742903]: [774.898779ms] [774.898779ms] END
	E0229 00:48:19.039324       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.42.186:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.42.186:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.42.186:443: connect: connection refused
	W0229 00:48:19.040544       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 00:48:19.041091       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 00:48:19.110874       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0229 00:48:19.118003       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0229 00:48:19.137339       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 00:49:03.481804       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 00:50:00.733977       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0229 00:50:03.480394       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 00:50:19.616408       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0229 00:50:20.076983       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0229 00:50:20.090610       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.254.143"}
	I0229 00:50:34.761243       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0229 00:50:34.782340       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	
	
	==> kube-controller-manager [83907237e123] <==
	I0229 00:48:42.929050       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0229 00:48:42.937959       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0229 00:48:42.939222       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0229 00:48:42.974253       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0229 00:49:12.027502       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0229 00:49:12.033061       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0229 00:49:12.118848       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0229 00:49:12.126119       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0229 00:49:24.741827       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7967645744" duration="116.005µs"
	I0229 00:49:27.951094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-5f6b4f85fd" duration="25.104851ms"
	I0229 00:49:27.951275       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-5f6b4f85fd" duration="47.502µs"
	I0229 00:49:28.931282       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0229 00:49:29.041017       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0229 00:49:29.041351       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0229 00:49:29.348495       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0229 00:49:33.451570       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0229 00:49:33.451600       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0229 00:49:37.363044       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7967645744" duration="21.567139ms"
	I0229 00:49:37.363327       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7967645744" duration="77.703µs"
	I0229 00:49:38.501871       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0229 00:50:03.771594       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0229 00:50:10.301316       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0229 00:50:14.047236       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="7.2µs"
	I0229 00:50:14.395078       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="8µs"
	I0229 00:50:18.610977       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-69cf46c98" duration="6.3µs"
	
	
	==> kube-proxy [d909df06ae7b] <==
	I0229 00:46:25.638370       1 server_others.go:69] "Using iptables proxy"
	I0229 00:46:25.767027       1 node.go:141] Successfully retrieved node IP: 172.19.6.238
	I0229 00:46:26.044844       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 00:46:26.044899       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 00:46:26.066941       1 server_others.go:152] "Using iptables Proxier"
	I0229 00:46:26.067018       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 00:46:26.067315       1 server.go:846] "Version info" version="v1.28.4"
	I0229 00:46:26.067336       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 00:46:26.121111       1 config.go:188] "Starting service config controller"
	I0229 00:46:26.122759       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 00:46:26.122899       1 config.go:97] "Starting endpoint slice config controller"
	I0229 00:46:26.122917       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 00:46:26.164814       1 config.go:315] "Starting node config controller"
	I0229 00:46:26.164855       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 00:46:26.261330       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 00:46:26.261419       1 shared_informer.go:318] Caches are synced for service config
	I0229 00:46:26.269554       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [993fcd4e7a76] <==
	W0229 00:46:04.596171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 00:46:04.596221       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0229 00:46:04.601193       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 00:46:04.601248       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 00:46:04.639457       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 00:46:04.639489       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 00:46:04.727080       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 00:46:04.728432       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 00:46:04.736616       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0229 00:46:04.737109       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0229 00:46:04.889636       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 00:46:04.889838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 00:46:04.903462       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 00:46:04.903638       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0229 00:46:04.913178       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 00:46:04.913482       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 00:46:04.921954       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 00:46:04.922151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 00:46:04.924389       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 00:46:04.924610       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 00:46:04.940423       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 00:46:04.940465       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 00:46:04.969318       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 00:46:04.969852       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0229 00:46:07.462218       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 00:50:32 addons-611800 kubelet[2584]: I0229 00:50:32.301118    2584 scope.go:117] "RemoveContainer" containerID="76e508e83c12f4e29f24da6b90bd4740fdd07b3590b34d7a8746625a72f0caea"
	Feb 29 00:50:32 addons-611800 kubelet[2584]: E0229 00:50:32.301929    2584 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-7dbzk_gadget(e1ac2e19-287b-4564-ae47-7558cfdaa585)\"" pod="gadget/gadget-7dbzk" podUID="e1ac2e19-287b-4564-ae47-7558cfdaa585"
	Feb 29 00:50:34 addons-611800 kubelet[2584]: I0229 00:50:34.860860    2584 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=11.520414878 podCreationTimestamp="2024-02-29 00:50:19 +0000 UTC" firstStartedPulling="2024-02-29 00:50:20.732337845 +0000 UTC m=+253.800992224" lastFinishedPulling="2024-02-29 00:50:25.072732312 +0000 UTC m=+258.141386791" observedRunningTime="2024-02-29 00:50:26.354006076 +0000 UTC m=+259.422660455" watchObservedRunningTime="2024-02-29 00:50:34.860809445 +0000 UTC m=+267.929463924"
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.173903    2584 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/e1ac2e19-287b-4564-ae47-7558cfdaa585-cgroup\") pod \"e1ac2e19-287b-4564-ae47-7558cfdaa585\" (UID: \"e1ac2e19-287b-4564-ae47-7558cfdaa585\") "
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.174051    2584 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/e1ac2e19-287b-4564-ae47-7558cfdaa585-debugfs\") pod \"e1ac2e19-287b-4564-ae47-7558cfdaa585\" (UID: \"e1ac2e19-287b-4564-ae47-7558cfdaa585\") "
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.174079    2584 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e1ac2e19-287b-4564-ae47-7558cfdaa585-run\") pod \"e1ac2e19-287b-4564-ae47-7558cfdaa585\" (UID: \"e1ac2e19-287b-4564-ae47-7558cfdaa585\") "
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.174100    2584 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/e1ac2e19-287b-4564-ae47-7558cfdaa585-modules\") pod \"e1ac2e19-287b-4564-ae47-7558cfdaa585\" (UID: \"e1ac2e19-287b-4564-ae47-7558cfdaa585\") "
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.174119    2584 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/e1ac2e19-287b-4564-ae47-7558cfdaa585-bpffs\") pod \"e1ac2e19-287b-4564-ae47-7558cfdaa585\" (UID: \"e1ac2e19-287b-4564-ae47-7558cfdaa585\") "
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.174169    2584 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj692\" (UniqueName: \"kubernetes.io/projected/e1ac2e19-287b-4564-ae47-7558cfdaa585-kube-api-access-lj692\") pod \"e1ac2e19-287b-4564-ae47-7558cfdaa585\" (UID: \"e1ac2e19-287b-4564-ae47-7558cfdaa585\") "
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.174193    2584 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1ac2e19-287b-4564-ae47-7558cfdaa585-host\") pod \"e1ac2e19-287b-4564-ae47-7558cfdaa585\" (UID: \"e1ac2e19-287b-4564-ae47-7558cfdaa585\") "
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.174382    2584 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1ac2e19-287b-4564-ae47-7558cfdaa585-host" (OuterVolumeSpecName: "host") pod "e1ac2e19-287b-4564-ae47-7558cfdaa585" (UID: "e1ac2e19-287b-4564-ae47-7558cfdaa585"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.174438    2584 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1ac2e19-287b-4564-ae47-7558cfdaa585-cgroup" (OuterVolumeSpecName: "cgroup") pod "e1ac2e19-287b-4564-ae47-7558cfdaa585" (UID: "e1ac2e19-287b-4564-ae47-7558cfdaa585"). InnerVolumeSpecName "cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.174540    2584 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1ac2e19-287b-4564-ae47-7558cfdaa585-modules" (OuterVolumeSpecName: "modules") pod "e1ac2e19-287b-4564-ae47-7558cfdaa585" (UID: "e1ac2e19-287b-4564-ae47-7558cfdaa585"). InnerVolumeSpecName "modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.174598    2584 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1ac2e19-287b-4564-ae47-7558cfdaa585-run" (OuterVolumeSpecName: "run") pod "e1ac2e19-287b-4564-ae47-7558cfdaa585" (UID: "e1ac2e19-287b-4564-ae47-7558cfdaa585"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.174619    2584 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1ac2e19-287b-4564-ae47-7558cfdaa585-bpffs" (OuterVolumeSpecName: "bpffs") pod "e1ac2e19-287b-4564-ae47-7558cfdaa585" (UID: "e1ac2e19-287b-4564-ae47-7558cfdaa585"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.174740    2584 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1ac2e19-287b-4564-ae47-7558cfdaa585-debugfs" (OuterVolumeSpecName: "debugfs") pod "e1ac2e19-287b-4564-ae47-7558cfdaa585" (UID: "e1ac2e19-287b-4564-ae47-7558cfdaa585"). InnerVolumeSpecName "debugfs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.178090    2584 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1ac2e19-287b-4564-ae47-7558cfdaa585-kube-api-access-lj692" (OuterVolumeSpecName: "kube-api-access-lj692") pod "e1ac2e19-287b-4564-ae47-7558cfdaa585" (UID: "e1ac2e19-287b-4564-ae47-7558cfdaa585"). InnerVolumeSpecName "kube-api-access-lj692". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.275083    2584 reconciler_common.go:300] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/e1ac2e19-287b-4564-ae47-7558cfdaa585-modules\") on node \"addons-611800\" DevicePath \"\""
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.275129    2584 reconciler_common.go:300] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e1ac2e19-287b-4564-ae47-7558cfdaa585-run\") on node \"addons-611800\" DevicePath \"\""
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.275143    2584 reconciler_common.go:300] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/e1ac2e19-287b-4564-ae47-7558cfdaa585-bpffs\") on node \"addons-611800\" DevicePath \"\""
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.275158    2584 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lj692\" (UniqueName: \"kubernetes.io/projected/e1ac2e19-287b-4564-ae47-7558cfdaa585-kube-api-access-lj692\") on node \"addons-611800\" DevicePath \"\""
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.275169    2584 reconciler_common.go:300] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1ac2e19-287b-4564-ae47-7558cfdaa585-host\") on node \"addons-611800\" DevicePath \"\""
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.275181    2584 reconciler_common.go:300] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/e1ac2e19-287b-4564-ae47-7558cfdaa585-cgroup\") on node \"addons-611800\" DevicePath \"\""
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.275194    2584 reconciler_common.go:300] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/e1ac2e19-287b-4564-ae47-7558cfdaa585-debugfs\") on node \"addons-611800\" DevicePath \"\""
	Feb 29 00:50:35 addons-611800 kubelet[2584]: I0229 00:50:35.636423    2584 scope.go:117] "RemoveContainer" containerID="76e508e83c12f4e29f24da6b90bd4740fdd07b3590b34d7a8746625a72f0caea"
	
	
	==> storage-provisioner [7586fd44fd78] <==
	I0229 00:46:53.001940       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 00:46:53.034529       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 00:46:53.034605       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 00:46:53.052909       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 00:46:53.053640       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-611800_85412688-1c20-44bb-865d-d4c2aab906eb!
	I0229 00:46:53.075000       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4ded6576-b5d4-4fc1-b1d7-5e7590c1c068", APIVersion:"v1", ResourceVersion:"826", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-611800_85412688-1c20-44bb-865d-d4c2aab906eb became leader
	I0229 00:46:53.253817       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-611800_85412688-1c20-44bb-865d-d4c2aab906eb!
	

-- /stdout --
** stderr ** 
	W0229 00:50:26.744710    5052 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-611800 -n addons-611800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-611800 -n addons-611800: (12.8286343s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-611800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-vwnfs ingress-nginx-admission-patch-z24xm
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-611800 describe pod ingress-nginx-admission-create-vwnfs ingress-nginx-admission-patch-z24xm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-611800 describe pod ingress-nginx-admission-create-vwnfs ingress-nginx-admission-patch-z24xm: exit status 1 (182.1551ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-vwnfs" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-z24xm" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-611800 describe pod ingress-nginx-admission-create-vwnfs ingress-nginx-admission-patch-z24xm: exit status 1
--- FAIL: TestAddons/parallel/Registry (81.65s)

TestErrorSpam/setup (181.31s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-384500 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 --driver=hyperv
E0229 00:54:28.495414    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
E0229 00:54:28.510922    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
E0229 00:54:28.526055    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
E0229 00:54:28.557800    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
E0229 00:54:28.605119    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
E0229 00:54:28.698711    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
E0229 00:54:28.871657    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
E0229 00:54:29.205429    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
E0229 00:54:29.852951    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
E0229 00:54:31.141799    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
E0229 00:54:33.716577    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
E0229 00:54:38.852153    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
E0229 00:54:49.103525    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
E0229 00:55:09.596605    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
E0229 00:55:50.571954    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-384500 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 --driver=hyperv: (3m1.3040323s)
error_spam_test.go:96: unexpected stderr: "W0229 00:53:32.467156    7420 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-384500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
- KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
- MINIKUBE_LOCATION=18063
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting control plane node nospam-384500 in cluster nospam-384500
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-384500" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0229 00:53:32.467156    7420 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (181.31s)

TestFunctional/serial/StartWithProxy (211.35s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-583600 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0229 00:59:28.509055    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
E0229 00:59:56.364182    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-583600 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: exit status 90 (3m19.9978847s)

-- stdout --
	* [functional-583600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node functional-583600 in cluster functional-583600
	* Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Found network options:
	  - HTTP_PROXY=localhost:64473
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	  - HTTP_PROXY=localhost:64473
	
	

-- /stdout --
** stderr ** 
	W0229 00:59:08.180332    4352 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:64473 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:64473 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:64473 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:64473 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (172.19.5.240).
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 29 01:00:57 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.270724284Z" level=info msg="Starting up"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.271600588Z" level=info msg="containerd not running, starting managed containerd"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.272822672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.302335125Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330394940Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330541874Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330606589Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330622893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330711314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330734119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331015184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331111006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331130811Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331141813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331235735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331672837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.334892184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335058023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335218960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335519930Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335716776Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335997141Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.336114268Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345670587Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345723999Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345743804Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345761908Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345778412Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345976858Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.346887370Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347090617Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347203343Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347224248Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347240252Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347255955Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347271659Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347286962Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347303566Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347323871Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347337874Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347351277Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347376483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347394387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347408691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347453601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347469705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347483808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347499312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347514115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347528619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347634443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347732966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347749870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347771375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347796081Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347827288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347935413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347956718Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348086248Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348121656Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348138660Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348150463Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348327504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348417125Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348468137Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348924843Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.349067876Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.349126390Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.349152196Z" level=info msg="containerd successfully booted in 0.047945s"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.381075108Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.394008111Z" level=info msg="Loading containers: start."
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.646849219Z" level=info msg="Loading containers: done."
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.663366278Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.663775373Z" level=info msg="Daemon has completed initialization"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.715844539Z" level=info msg="API listen on [::]:2376"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.715977470Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 01:00:57 functional-583600 systemd[1]: Started Docker Application Container Engine.
	Feb 29 01:01:27 functional-583600 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.082774000Z" level=info msg="Processing signal 'terminated'"
	Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.084438866Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.084969887Z" level=info msg="Daemon shutdown complete"
	Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.085649314Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.085809320Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 01:01:28 functional-583600 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 01:01:28 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:01:28 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:01:28 functional-583600 dockerd[999]: time="2024-02-29T01:01:28.160307636Z" level=info msg="Starting up"
	Feb 29 01:02:28 functional-583600 dockerd[999]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:02:28 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:02:28 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:02:28 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-windows-amd64.exe start -p functional-583600 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv": exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600: exit status 6 (11.3434215s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 01:02:28.197372    2580 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 01:02:39.383393    2580 status.go:415] kubeconfig endpoint: extract IP: "functional-583600" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-583600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/StartWithProxy (211.35s)

                                                
                                    
TestFunctional/serial/SoftStart (180.84s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-583600 --alsologtostderr -v=8
E0229 01:04:28.520011    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-583600 --alsologtostderr -v=8: exit status 90 (2m49.3479815s)

                                                
                                                
-- stdout --
	* [functional-583600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node functional-583600 in cluster functional-583600
	* Updating the running hyperv "functional-583600" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 01:02:39.536763   13184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 01:02:39.588787   13184 out.go:291] Setting OutFile to fd 840 ...
	I0229 01:02:39.589113   13184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:02:39.589113   13184 out.go:304] Setting ErrFile to fd 516...
	I0229 01:02:39.589113   13184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:02:39.611049   13184 out.go:298] Setting JSON to false
	I0229 01:02:39.614142   13184 start.go:129] hostinfo: {"hostname":"minikube5","uptime":264786,"bootTime":1708903772,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 01:02:39.614241   13184 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 01:02:39.615416   13184 out.go:177] * [functional-583600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 01:02:39.615966   13184 notify.go:220] Checking for updates...
	I0229 01:02:39.615966   13184 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 01:02:39.616703   13184 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:02:39.617458   13184 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 01:02:39.617458   13184 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:02:39.618132   13184 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:02:39.619636   13184 config.go:182] Loaded profile config "functional-583600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 01:02:39.619901   13184 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:02:44.658842   13184 out.go:177] * Using the hyperv driver based on existing profile
	I0229 01:02:44.659553   13184 start.go:299] selected driver: hyperv
	I0229 01:02:44.659553   13184 start.go:903] validating driver "hyperv" against &{Name:functional-583600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-583600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.19.5.240 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:02:44.659553   13184 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 01:02:44.704873   13184 cni.go:84] Creating CNI manager for ""
	I0229 01:02:44.704873   13184 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 01:02:44.704873   13184 start_flags.go:323] config:
	{Name:functional-583600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-583600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.19.5.240 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:02:44.705594   13184 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:02:44.706922   13184 out.go:177] * Starting control plane node functional-583600 in cluster functional-583600
	I0229 01:02:44.706922   13184 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 01:02:44.706922   13184 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 01:02:44.706922   13184 cache.go:56] Caching tarball of preloaded images
	I0229 01:02:44.707909   13184 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 01:02:44.708100   13184 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 01:02:44.708281   13184 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-583600\config.json ...
	I0229 01:02:44.710260   13184 start.go:365] acquiring machines lock for functional-583600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 01:02:44.710435   13184 start.go:369] acquired machines lock for "functional-583600" in 114.6µs
	I0229 01:02:44.710610   13184 start.go:96] Skipping create...Using existing machine configuration
	I0229 01:02:44.710610   13184 fix.go:54] fixHost starting: 
	I0229 01:02:44.711130   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
	I0229 01:02:47.354064   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:02:47.354064   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:02:47.354064   13184 fix.go:102] recreateIfNeeded on functional-583600: state=Running err=<nil>
	W0229 01:02:47.354064   13184 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 01:02:47.354688   13184 out.go:177] * Updating the running hyperv "functional-583600" VM ...
	I0229 01:02:47.355271   13184 machine.go:88] provisioning docker machine ...
	I0229 01:02:47.355271   13184 buildroot.go:166] provisioning hostname "functional-583600"
	I0229 01:02:47.355271   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
	I0229 01:02:49.407626   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:02:49.407678   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:02:49.407678   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
	I0229 01:02:51.829620   13184 main.go:141] libmachine: [stdout =====>] : 172.19.5.240
	
	I0229 01:02:51.830098   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:02:51.833716   13184 main.go:141] libmachine: Using SSH client type: native
	I0229 01:02:51.834316   13184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.240 22 <nil> <nil>}
	I0229 01:02:51.834316   13184 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-583600 && echo "functional-583600" | sudo tee /etc/hostname
	I0229 01:02:51.994232   13184 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-583600
	
	I0229 01:02:51.994336   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
	I0229 01:02:54.039644   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:02:54.039644   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:02:54.039644   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
	I0229 01:02:56.491837   13184 main.go:141] libmachine: [stdout =====>] : 172.19.5.240
	
	I0229 01:02:56.491837   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:02:56.496149   13184 main.go:141] libmachine: Using SSH client type: native
	I0229 01:02:56.496149   13184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.240 22 <nil> <nil>}
	I0229 01:02:56.496678   13184 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-583600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-583600/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-583600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 01:02:56.623110   13184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 01:02:56.623110   13184 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 01:02:56.623232   13184 buildroot.go:174] setting up certificates
	I0229 01:02:56.623232   13184 provision.go:83] configureAuth start
	I0229 01:02:56.623232   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
	I0229 01:02:58.624297   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:02:58.624297   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:02:58.624810   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
	I0229 01:03:01.072851   13184 main.go:141] libmachine: [stdout =====>] : 172.19.5.240
	
	I0229 01:03:01.072851   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:01.072851   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
	I0229 01:03:03.041742   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:03:03.041742   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:03.041742   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
	I0229 01:03:05.440928   13184 main.go:141] libmachine: [stdout =====>] : 172.19.5.240
	
	I0229 01:03:05.440928   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:05.440928   13184 provision.go:138] copyHostCerts
	I0229 01:03:05.441886   13184 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 01:03:05.442157   13184 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 01:03:05.442157   13184 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 01:03:05.442520   13184 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 01:03:05.443501   13184 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 01:03:05.443579   13184 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 01:03:05.443579   13184 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 01:03:05.443579   13184 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 01:03:05.444552   13184 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 01:03:05.444552   13184 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 01:03:05.444552   13184 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 01:03:05.444552   13184 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0229 01:03:05.445350   13184 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-583600 san=[172.19.5.240 172.19.5.240 localhost 127.0.0.1 minikube functional-583600]
	I0229 01:03:05.651675   13184 provision.go:172] copyRemoteCerts
	I0229 01:03:05.660199   13184 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 01:03:05.661202   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
	I0229 01:03:07.679344   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:03:07.679709   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:07.679785   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
	I0229 01:03:10.106739   13184 main.go:141] libmachine: [stdout =====>] : 172.19.5.240
	
	I0229 01:03:10.106890   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:10.107248   13184 sshutil.go:53] new ssh client: &{IP:172.19.5.240 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-583600\id_rsa Username:docker}
	I0229 01:03:10.223622   13184 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5621648s)
	I0229 01:03:10.223622   13184 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 01:03:10.223622   13184 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 01:03:10.268427   13184 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 01:03:10.269000   13184 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 01:03:10.314777   13184 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 01:03:10.315090   13184 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 01:03:10.360962   13184 provision.go:86] duration metric: configureAuth took 13.7369606s
	I0229 01:03:10.360962   13184 buildroot.go:189] setting minikube options for container-runtime
	I0229 01:03:10.361872   13184 config.go:182] Loaded profile config "functional-583600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 01:03:10.361983   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
	I0229 01:03:12.379674   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:03:12.380164   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:12.380164   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
	I0229 01:03:14.777703   13184 main.go:141] libmachine: [stdout =====>] : 172.19.5.240
	
	I0229 01:03:14.777703   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:14.781961   13184 main.go:141] libmachine: Using SSH client type: native
	I0229 01:03:14.782352   13184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.240 22 <nil> <nil>}
	I0229 01:03:14.782352   13184 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 01:03:14.914758   13184 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 01:03:14.914758   13184 buildroot.go:70] root file system type: tmpfs
	I0229 01:03:14.914758   13184 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 01:03:14.914758   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
	I0229 01:03:16.922082   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:03:16.922412   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:16.922506   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
	I0229 01:03:19.340168   13184 main.go:141] libmachine: [stdout =====>] : 172.19.5.240
	
	I0229 01:03:19.340168   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:19.344047   13184 main.go:141] libmachine: Using SSH client type: native
	I0229 01:03:19.344660   13184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.240 22 <nil> <nil>}
	I0229 01:03:19.344660   13184 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 01:03:19.498714   13184 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 01:03:19.498714   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
	I0229 01:03:21.506370   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:03:21.506370   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:21.506370   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
	I0229 01:03:23.863375   13184 main.go:141] libmachine: [stdout =====>] : 172.19.5.240
	
	I0229 01:03:23.863375   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:23.867628   13184 main.go:141] libmachine: Using SSH client type: native
	I0229 01:03:23.868237   13184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.240 22 <nil> <nil>}
	I0229 01:03:23.868237   13184 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 01:03:24.013707   13184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 01:03:24.013707   13184 machine.go:91] provisioned docker machine in 36.6563835s
	I0229 01:03:24.013707   13184 start.go:300] post-start starting for "functional-583600" (driver="hyperv")
	I0229 01:03:24.013707   13184 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 01:03:24.022880   13184 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 01:03:24.022880   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
	I0229 01:03:26.072620   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:03:26.072620   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:26.072620   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
	I0229 01:03:28.484570   13184 main.go:141] libmachine: [stdout =====>] : 172.19.5.240
	
	I0229 01:03:28.485246   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:28.485364   13184 sshutil.go:53] new ssh client: &{IP:172.19.5.240 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-583600\id_rsa Username:docker}
	I0229 01:03:28.592891   13184 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5696505s)
	I0229 01:03:28.602163   13184 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 01:03:28.610014   13184 command_runner.go:130] > NAME=Buildroot
	I0229 01:03:28.610100   13184 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 01:03:28.610100   13184 command_runner.go:130] > ID=buildroot
	I0229 01:03:28.610100   13184 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 01:03:28.610100   13184 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 01:03:28.610268   13184 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 01:03:28.610370   13184 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 01:03:28.610911   13184 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 01:03:28.611701   13184 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> 33122.pem in /etc/ssl/certs
	I0229 01:03:28.611701   13184 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /etc/ssl/certs/33122.pem
	I0229 01:03:28.611701   13184 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\3312\hosts -> hosts in /etc/test/nested/copy/3312
	I0229 01:03:28.611701   13184 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\3312\hosts -> /etc/test/nested/copy/3312/hosts
	I0229 01:03:28.622699   13184 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/3312
	I0229 01:03:28.643196   13184 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /etc/ssl/certs/33122.pem (1708 bytes)
	I0229 01:03:28.689246   13184 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\3312\hosts --> /etc/test/nested/copy/3312/hosts (40 bytes)
	I0229 01:03:28.737804   13184 start.go:303] post-start completed in 4.7238322s
	I0229 01:03:28.737944   13184 fix.go:56] fixHost completed within 44.0248681s
	I0229 01:03:28.737999   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
	I0229 01:03:30.753913   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:03:30.754156   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:30.754156   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
	I0229 01:03:33.173934   13184 main.go:141] libmachine: [stdout =====>] : 172.19.5.240
	
	I0229 01:03:33.173934   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:33.178890   13184 main.go:141] libmachine: Using SSH client type: native
	I0229 01:03:33.179493   13184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.240 22 <nil> <nil>}
	I0229 01:03:33.179493   13184 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 01:03:33.309403   13184 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709168613.481654338
	
	I0229 01:03:33.309555   13184 fix.go:206] guest clock: 1709168613.481654338
	I0229 01:03:33.309555   13184 fix.go:219] Guest: 2024-02-29 01:03:33.481654338 +0000 UTC Remote: 2024-02-29 01:03:28.7379441 +0000 UTC m=+49.286202401 (delta=4.743710238s)
	I0229 01:03:33.309682   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
	I0229 01:03:35.344695   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:03:35.345646   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:35.345742   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
	I0229 01:03:37.766101   13184 main.go:141] libmachine: [stdout =====>] : 172.19.5.240
	
	I0229 01:03:37.766101   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:37.771852   13184 main.go:141] libmachine: Using SSH client type: native
	I0229 01:03:37.771852   13184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.240 22 <nil> <nil>}
	I0229 01:03:37.772375   13184 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709168613
	I0229 01:03:37.913830   13184 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 01:03:33 UTC 2024
	
	I0229 01:03:37.913830   13184 fix.go:226] clock set: Thu Feb 29 01:03:33 UTC 2024
	 (err=<nil>)
	I0229 01:03:37.913830   13184 start.go:83] releasing machines lock for "functional-583600", held for 53.2003598s
	I0229 01:03:37.914401   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
	I0229 01:03:39.931270   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:03:39.931270   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:39.931345   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
	I0229 01:03:42.338065   13184 main.go:141] libmachine: [stdout =====>] : 172.19.5.240
	
	I0229 01:03:42.338065   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:42.342111   13184 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 01:03:42.342293   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
	I0229 01:03:42.349513   13184 ssh_runner.go:195] Run: cat /version.json
	I0229 01:03:42.350594   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
	I0229 01:03:44.395322   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:03:44.395322   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:44.395322   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
	I0229 01:03:44.395322   13184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:03:44.395810   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:44.395906   13184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
	I0229 01:03:46.867426   13184 main.go:141] libmachine: [stdout =====>] : 172.19.5.240
	
	I0229 01:03:46.867719   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:46.868005   13184 sshutil.go:53] new ssh client: &{IP:172.19.5.240 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-583600\id_rsa Username:docker}
	I0229 01:03:46.891650   13184 main.go:141] libmachine: [stdout =====>] : 172.19.5.240
	
	I0229 01:03:46.892629   13184 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:03:46.893087   13184 sshutil.go:53] new ssh client: &{IP:172.19.5.240 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-583600\id_rsa Username:docker}
	I0229 01:03:47.034309   13184 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 01:03:47.035001   13184 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0229 01:03:47.035001   13184 ssh_runner.go:235] Completed: cat /version.json: (4.6842114s)
	I0229 01:03:47.035175   13184 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6926545s)
	I0229 01:03:47.044187   13184 ssh_runner.go:195] Run: systemctl --version
	I0229 01:03:47.053997   13184 command_runner.go:130] > systemd 252 (252)
	I0229 01:03:47.053997   13184 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0229 01:03:47.065527   13184 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 01:03:47.073661   13184 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0229 01:03:47.074297   13184 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 01:03:47.083184   13184 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 01:03:47.100136   13184 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0229 01:03:47.100173   13184 start.go:475] detecting cgroup driver to use...
	I0229 01:03:47.100173   13184 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:03:47.135004   13184 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0229 01:03:47.144183   13184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 01:03:47.173831   13184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 01:03:47.193286   13184 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 01:03:47.201781   13184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 01:03:47.233649   13184 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 01:03:47.264538   13184 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 01:03:47.291934   13184 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 01:03:47.324210   13184 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 01:03:47.354326   13184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 01:03:47.383677   13184 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 01:03:47.403610   13184 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 01:03:47.412504   13184 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 01:03:47.444479   13184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:03:47.650596   13184 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 01:03:47.682277   13184 start.go:475] detecting cgroup driver to use...
	I0229 01:03:47.693925   13184 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 01:03:47.716663   13184 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0229 01:03:47.718251   13184 command_runner.go:130] > [Unit]
	I0229 01:03:47.718251   13184 command_runner.go:130] > Description=Docker Application Container Engine
	I0229 01:03:47.718251   13184 command_runner.go:130] > Documentation=https://docs.docker.com
	I0229 01:03:47.718251   13184 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0229 01:03:47.718251   13184 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0229 01:03:47.718251   13184 command_runner.go:130] > StartLimitBurst=3
	I0229 01:03:47.718251   13184 command_runner.go:130] > StartLimitIntervalSec=60
	I0229 01:03:47.718251   13184 command_runner.go:130] > [Service]
	I0229 01:03:47.718251   13184 command_runner.go:130] > Type=notify
	I0229 01:03:47.718251   13184 command_runner.go:130] > Restart=on-failure
	I0229 01:03:47.718251   13184 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0229 01:03:47.718251   13184 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0229 01:03:47.718251   13184 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0229 01:03:47.718251   13184 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0229 01:03:47.718251   13184 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0229 01:03:47.718251   13184 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0229 01:03:47.718251   13184 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0229 01:03:47.718564   13184 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0229 01:03:47.718564   13184 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0229 01:03:47.718564   13184 command_runner.go:130] > ExecStart=
	I0229 01:03:47.718564   13184 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0229 01:03:47.718651   13184 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0229 01:03:47.718651   13184 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0229 01:03:47.718651   13184 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0229 01:03:47.718711   13184 command_runner.go:130] > LimitNOFILE=infinity
	I0229 01:03:47.718711   13184 command_runner.go:130] > LimitNPROC=infinity
	I0229 01:03:47.718711   13184 command_runner.go:130] > LimitCORE=infinity
	I0229 01:03:47.718757   13184 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0229 01:03:47.718757   13184 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0229 01:03:47.718757   13184 command_runner.go:130] > TasksMax=infinity
	I0229 01:03:47.718787   13184 command_runner.go:130] > TimeoutStartSec=0
	I0229 01:03:47.718787   13184 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0229 01:03:47.718831   13184 command_runner.go:130] > Delegate=yes
	I0229 01:03:47.718831   13184 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0229 01:03:47.718831   13184 command_runner.go:130] > KillMode=process
	I0229 01:03:47.718831   13184 command_runner.go:130] > [Install]
	I0229 01:03:47.718831   13184 command_runner.go:130] > WantedBy=multi-user.target
	I0229 01:03:47.730126   13184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:03:47.758634   13184 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 01:03:47.799700   13184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:03:47.833621   13184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 01:03:47.860373   13184 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:03:47.895679   13184 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0229 01:03:47.905214   13184 ssh_runner.go:195] Run: which cri-dockerd
	I0229 01:03:47.910931   13184 command_runner.go:130] > /usr/bin/cri-dockerd
	I0229 01:03:47.924315   13184 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 01:03:47.941930   13184 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 01:03:47.983624   13184 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 01:03:48.185294   13184 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 01:03:48.368319   13184 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 01:03:48.368319   13184 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 01:03:48.418633   13184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:03:48.617439   13184 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 01:05:28.695843   13184 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0229 01:05:28.696359   13184 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0229 01:05:28.696359   13184 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m40.0733159s)
	I0229 01:05:28.707690   13184 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0229 01:05:28.731281   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	I0229 01:05:28.731281   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.270724284Z" level=info msg="Starting up"
	I0229 01:05:28.731281   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.271600588Z" level=info msg="containerd not running, starting managed containerd"
	I0229 01:05:28.731281   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.272822672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	I0229 01:05:28.731437   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.302335125Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	I0229 01:05:28.731437   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330394940Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0229 01:05:28.731437   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330541874Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0229 01:05:28.731508   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330606589Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0229 01:05:28.731508   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330622893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0229 01:05:28.731553   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330711314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0229 01:05:28.731577   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330734119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0229 01:05:28.731577   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331015184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0229 01:05:28.731641   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331111006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0229 01:05:28.731641   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331130811Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0229 01:05:28.731704   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331141813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0229 01:05:28.731704   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331235735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0229 01:05:28.731704   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331672837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0229 01:05:28.731704   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.334892184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0229 01:05:28.731704   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335058023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0229 01:05:28.731820   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335218960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0229 01:05:28.731820   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335519930Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0229 01:05:28.731820   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335716776Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0229 01:05:28.731909   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335997141Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0229 01:05:28.731964   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.336114268Z" level=info msg="metadata content store policy set" policy=shared
	I0229 01:05:28.731964   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345670587Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0229 01:05:28.732009   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345723999Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0229 01:05:28.732009   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345743804Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0229 01:05:28.732009   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345761908Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0229 01:05:28.732009   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345778412Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0229 01:05:28.732009   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345976858Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0229 01:05:28.732121   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.346887370Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0229 01:05:28.732121   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347090617Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0229 01:05:28.732239   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347203343Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0229 01:05:28.732239   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347224248Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0229 01:05:28.732239   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347240252Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0229 01:05:28.732239   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347255955Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0229 01:05:28.732341   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347271659Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0229 01:05:28.732341   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347286962Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0229 01:05:28.732341   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347303566Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0229 01:05:28.732341   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347323871Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0229 01:05:28.732449   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347337874Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0229 01:05:28.732449   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347351277Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0229 01:05:28.732449   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347376483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0229 01:05:28.732449   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347394387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0229 01:05:28.732449   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347408691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0229 01:05:28.732559   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347453601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0229 01:05:28.732559   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347469705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0229 01:05:28.732559   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347483808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0229 01:05:28.732559   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347499312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0229 01:05:28.732559   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347514115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0229 01:05:28.732667   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347528619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0229 01:05:28.732667   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347634443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0229 01:05:28.732667   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347732966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0229 01:05:28.732741   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347749870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0229 01:05:28.732741   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347771375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0229 01:05:28.732741   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347796081Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0229 01:05:28.732805   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347827288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0229 01:05:28.732805   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347935413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0229 01:05:28.732805   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347956718Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0229 01:05:28.732805   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348086248Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0229 01:05:28.732805   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348121656Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0229 01:05:28.732805   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348138660Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0229 01:05:28.732805   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348150463Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0229 01:05:28.732805   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348327504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0229 01:05:28.732805   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348417125Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0229 01:05:28.732805   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348468137Z" level=info msg="NRI interface is disabled by configuration."
	I0229 01:05:28.733020   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348924843Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0229 01:05:28.733020   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.349067876Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0229 01:05:28.733020   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.349126390Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0229 01:05:28.733020   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.349152196Z" level=info msg="containerd successfully booted in 0.047945s"
	I0229 01:05:28.733020   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.381075108Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0229 01:05:28.733020   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.394008111Z" level=info msg="Loading containers: start."
	I0229 01:05:28.733020   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.646849219Z" level=info msg="Loading containers: done."
	I0229 01:05:28.733124   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.663366278Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	I0229 01:05:28.733124   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.663775373Z" level=info msg="Daemon has completed initialization"
	I0229 01:05:28.733124   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.715844539Z" level=info msg="API listen on [::]:2376"
	I0229 01:05:28.733124   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.715977470Z" level=info msg="API listen on /var/run/docker.sock"
	I0229 01:05:28.733124   13184 command_runner.go:130] > Feb 29 01:00:57 functional-583600 systemd[1]: Started Docker Application Container Engine.
	I0229 01:05:28.733124   13184 command_runner.go:130] > Feb 29 01:01:27 functional-583600 systemd[1]: Stopping Docker Application Container Engine...
	I0229 01:05:28.733124   13184 command_runner.go:130] > Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.082774000Z" level=info msg="Processing signal 'terminated'"
	I0229 01:05:28.733124   13184 command_runner.go:130] > Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.084438866Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0229 01:05:28.733124   13184 command_runner.go:130] > Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.084969887Z" level=info msg="Daemon shutdown complete"
	I0229 01:05:28.733124   13184 command_runner.go:130] > Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.085649314Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0229 01:05:28.733267   13184 command_runner.go:130] > Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.085809320Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0229 01:05:28.733267   13184 command_runner.go:130] > Feb 29 01:01:28 functional-583600 systemd[1]: docker.service: Deactivated successfully.
	I0229 01:05:28.733267   13184 command_runner.go:130] > Feb 29 01:01:28 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	I0229 01:05:28.733267   13184 command_runner.go:130] > Feb 29 01:01:28 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	I0229 01:05:28.733267   13184 command_runner.go:130] > Feb 29 01:01:28 functional-583600 dockerd[999]: time="2024-02-29T01:01:28.160307636Z" level=info msg="Starting up"
	I0229 01:05:28.733267   13184 command_runner.go:130] > Feb 29 01:02:28 functional-583600 dockerd[999]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0229 01:05:28.733380   13184 command_runner.go:130] > Feb 29 01:02:28 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0229 01:05:28.733380   13184 command_runner.go:130] > Feb 29 01:02:28 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0229 01:05:28.733479   13184 command_runner.go:130] > Feb 29 01:02:28 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	I0229 01:05:28.733479   13184 command_runner.go:130] > Feb 29 01:02:28 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	I0229 01:05:28.733479   13184 command_runner.go:130] > Feb 29 01:02:28 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	I0229 01:05:28.733479   13184 command_runner.go:130] > Feb 29 01:02:28 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	I0229 01:05:28.733479   13184 command_runner.go:130] > Feb 29 01:02:28 functional-583600 dockerd[1010]: time="2024-02-29T01:02:28.574763226Z" level=info msg="Starting up"
	I0229 01:05:28.733479   13184 command_runner.go:130] > Feb 29 01:03:28 functional-583600 dockerd[1010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0229 01:05:28.733479   13184 command_runner.go:130] > Feb 29 01:03:28 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0229 01:05:28.733577   13184 command_runner.go:130] > Feb 29 01:03:28 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0229 01:05:28.733577   13184 command_runner.go:130] > Feb 29 01:03:28 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	I0229 01:05:28.733577   13184 command_runner.go:130] > Feb 29 01:03:28 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	I0229 01:05:28.733577   13184 command_runner.go:130] > Feb 29 01:03:28 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	I0229 01:05:28.733577   13184 command_runner.go:130] > Feb 29 01:03:28 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	I0229 01:05:28.733577   13184 command_runner.go:130] > Feb 29 01:03:28 functional-583600 dockerd[1223]: time="2024-02-29T01:03:28.781864216Z" level=info msg="Starting up"
	I0229 01:05:28.733577   13184 command_runner.go:130] > Feb 29 01:03:48 functional-583600 dockerd[1223]: time="2024-02-29T01:03:48.815476282Z" level=info msg="Processing signal 'terminated'"
	I0229 01:05:28.733679   13184 command_runner.go:130] > Feb 29 01:04:28 functional-583600 dockerd[1223]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0229 01:05:28.733679   13184 command_runner.go:130] > Feb 29 01:04:28 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0229 01:05:28.733679   13184 command_runner.go:130] > Feb 29 01:04:28 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0229 01:05:28.733679   13184 command_runner.go:130] > Feb 29 01:04:28 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	I0229 01:05:28.733679   13184 command_runner.go:130] > Feb 29 01:04:28 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	I0229 01:05:28.733768   13184 command_runner.go:130] > Feb 29 01:04:28 functional-583600 dockerd[1418]: time="2024-02-29T01:04:28.860805006Z" level=info msg="Starting up"
	I0229 01:05:28.733768   13184 command_runner.go:130] > Feb 29 01:05:28 functional-583600 dockerd[1418]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0229 01:05:28.733768   13184 command_runner.go:130] > Feb 29 01:05:28 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0229 01:05:28.733768   13184 command_runner.go:130] > Feb 29 01:05:28 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0229 01:05:28.733768   13184 command_runner.go:130] > Feb 29 01:05:28 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	I0229 01:05:28.740450   13184 out.go:177] 
	W0229 01:05:28.741586   13184 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 29 01:00:57 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.270724284Z" level=info msg="Starting up"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.271600588Z" level=info msg="containerd not running, starting managed containerd"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.272822672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.302335125Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330394940Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330541874Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330606589Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330622893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330711314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330734119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331015184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331111006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331130811Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331141813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331235735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331672837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.334892184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335058023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335218960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335519930Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335716776Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335997141Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.336114268Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345670587Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345723999Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345743804Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345761908Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345778412Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345976858Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.346887370Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347090617Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347203343Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347224248Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347240252Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347255955Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347271659Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347286962Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347303566Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347323871Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347337874Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347351277Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347376483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347394387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347408691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347453601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347469705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347483808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347499312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347514115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347528619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347634443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347732966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347749870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347771375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347796081Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347827288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347935413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347956718Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348086248Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348121656Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348138660Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348150463Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348327504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348417125Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348468137Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348924843Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.349067876Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.349126390Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.349152196Z" level=info msg="containerd successfully booted in 0.047945s"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.381075108Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.394008111Z" level=info msg="Loading containers: start."
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.646849219Z" level=info msg="Loading containers: done."
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.663366278Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.663775373Z" level=info msg="Daemon has completed initialization"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.715844539Z" level=info msg="API listen on [::]:2376"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.715977470Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 01:00:57 functional-583600 systemd[1]: Started Docker Application Container Engine.
	Feb 29 01:01:27 functional-583600 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.082774000Z" level=info msg="Processing signal 'terminated'"
	Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.084438866Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.084969887Z" level=info msg="Daemon shutdown complete"
	Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.085649314Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.085809320Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 01:01:28 functional-583600 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 01:01:28 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:01:28 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:01:28 functional-583600 dockerd[999]: time="2024-02-29T01:01:28.160307636Z" level=info msg="Starting up"
	Feb 29 01:02:28 functional-583600 dockerd[999]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:02:28 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:02:28 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:02:28 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 01:02:28 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Feb 29 01:02:28 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:02:28 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:02:28 functional-583600 dockerd[1010]: time="2024-02-29T01:02:28.574763226Z" level=info msg="Starting up"
	Feb 29 01:03:28 functional-583600 dockerd[1010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:03:28 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:03:28 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:03:28 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 01:03:28 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Feb 29 01:03:28 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:03:28 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:03:28 functional-583600 dockerd[1223]: time="2024-02-29T01:03:28.781864216Z" level=info msg="Starting up"
	Feb 29 01:03:48 functional-583600 dockerd[1223]: time="2024-02-29T01:03:48.815476282Z" level=info msg="Processing signal 'terminated'"
	Feb 29 01:04:28 functional-583600 dockerd[1223]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:04:28 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:04:28 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:04:28 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:04:28 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:04:28 functional-583600 dockerd[1418]: time="2024-02-29T01:04:28.860805006Z" level=info msg="Starting up"
	Feb 29 01:05:28 functional-583600 dockerd[1418]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:05:28 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:05:28 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:05:28 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 29 01:00:57 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.270724284Z" level=info msg="Starting up"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.271600588Z" level=info msg="containerd not running, starting managed containerd"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.272822672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.302335125Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330394940Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330541874Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330606589Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330622893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330711314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330734119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331015184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331111006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331130811Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331141813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331235735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331672837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.334892184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335058023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335218960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335519930Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335716776Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335997141Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.336114268Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345670587Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345723999Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345743804Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	[journalctl output identical to the docker.service log shown above, Feb 29 01:00:57 through 01:05:28]
	
	-- /stdout --
	W0229 01:05:28.741586   13184 out.go:239] * 
	W0229 01:05:28.743422   13184 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:05:28.743969   13184 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-windows-amd64.exe start -p functional-583600 --alsologtostderr -v=8": exit status 90
functional_test.go:659: soft start took 2m49.4923968s for "functional-583600" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600: exit status 6 (11.3480475s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 01:05:29.042901    9768 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 01:05:40.213691    9768 status.go:415] kubeconfig endpoint: extract IP: "functional-583600" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-583600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/SoftStart (180.84s)
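A warning that recurs through these failures is the missing Docker CLI context file under `...\.docker\contexts\meta\37a8eec1...\meta.json`. That directory name is not random: the Docker CLI stores context metadata under the SHA-256 digest of the context name, so the path in the warning corresponds to the `default` context. A quick sketch to confirm this, assuming a POSIX shell with `sha256sum` available:

```shell
# Docker CLI context metadata is stored under sha256(<context name>).
# The digest below matches the directory name in the warning, i.e. the
# missing metadata belongs to the "default" context.
printf '%s' default | sha256sum
# -> 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f  -
```

This suggests the `default` Docker context was deleted or never written on this Jenkins agent, which is why every subsequent minikube invocation logs the same `main.go:291` warning.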

TestFunctional/serial/KubeContext (11.21s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (113.843ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-583600", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600: exit status 6 (11.0900071s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 01:05:40.509923    7264 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 01:05:51.436784    7264 status.go:415] kubeconfig endpoint: extract IP: "functional-583600" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-583600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/KubeContext (11.21s)

TestFunctional/serial/KubectlGetPods (11.36s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-583600 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-583600 get po -A: exit status 1 (107.9347ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-583600

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-583600 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-583600\n"*: args "kubectl --context functional-583600 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-583600 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600: exit status 6 (11.239161s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 01:05:51.714467    1672 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 01:06:02.794770    1672 status.go:415] kubeconfig endpoint: extract IP: "functional-583600" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-583600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/KubectlGetPods (11.36s)
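The `status.go:415` errors above come from minikube checking whether the profile name appears in the kubeconfig before it extracts the API server IP. A minimal sketch of that lookup against a stub kubeconfig (the stub file and its contents are illustrative; the real file is the one at the truncated `minikube-integration\kubeconfig` path in the log):

```shell
# Stub kubeconfig with no clusters, mimicking the failing state on this agent
# (assumption: stands in for the real minikube-integration kubeconfig).
kubeconfig=$(mktemp)
cat > "$kubeconfig" <<'EOF'
apiVersion: v1
kind: Config
clusters: []
contexts: []
current-context: ""
EOF
# minikube reports: "functional-583600" does not appear in <kubeconfig>
# when a lookup like this finds no matching cluster entry:
if grep -q 'functional-583600' "$kubeconfig"; then
  echo "profile present"
else
  echo "profile absent"
fi
# -> profile absent
```

This is consistent with the repeated `WARNING: Your kubectl is pointing to stale minikube-vm` output: the soft-start failure left the kubeconfig without an entry for the `functional-583600` cluster, so every kubectl-based subtest that follows fails the same way.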

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 ssh sudo crictl images: exit status 1 (8.8236721s)

-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused" 

-- /stdout --
** stderr ** 
	W0229 01:12:31.511712    4088 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1122: failed to get images by "out/minikube-windows-amd64.exe -p functional-583600 ssh sudo crictl images" ssh exit status 1
functional_test.go:1126: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused" 

-- /stdout --
** stderr ** 
	W0229 01:12:31.511712    4088 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr ***
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.83s)

TestFunctional/serial/CacheCmd/cache/cache_reload (179.9s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 1 (50.425148s)

-- stdout --
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

-- /stdout --
** stderr ** 
	W0229 01:12:40.338559    6620 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1146: failed to manually delete image "out/minikube-windows-amd64.exe -p functional-583600 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 1
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (8.8637661s)

-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused" 

-- /stdout --
** stderr ** 
	W0229 01:13:30.772739     184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 cache reload
E0229 01:14:28.563588    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 cache reload: (1m51.6838418s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (8.9254439s)

-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused" 

-- /stdout --
** stderr ** 
	W0229 01:15:31.325972    5628 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1161: expected "out/minikube-windows-amd64.exe -p functional-583600 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 1
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (179.90s)

TestFunctional/serial/MinikubeKubectlCmd (11.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 kubectl -- --context functional-583600 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 kubectl -- --context functional-583600 get pods: exit status 1 (354.7548ms)

** stderr ** 
	W0229 01:15:52.028842   13396 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error in configuration: 
	* context was not found for specified context: functional-583600
	* no server found for cluster "functional-583600"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-windows-amd64.exe -p functional-583600 kubectl -- --context functional-583600 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600: exit status 6 (11.2069556s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 01:15:52.383528    9624 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 01:16:03.423489    9624 status.go:415] kubeconfig endpoint: extract IP: "functional-583600" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-583600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (11.56s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (11.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600: exit status 6 (11.1577584s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 01:16:03.605620    1532 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 01:16:14.600973    1532 status.go:415] kubeconfig endpoint: extract IP: "functional-583600" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-583600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (11.16s)

TestFunctional/serial/ExtraConfig (148.62s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-583600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-583600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 90 (2m17.1921845s)

-- stdout --
	* [functional-583600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node functional-583600 in cluster functional-583600
	* Updating the running hyperv "functional-583600" VM ...
	
	

-- /stdout --
** stderr ** 
	W0229 01:16:14.750960    8552 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! This VM is having trouble accessing https://registry.k8s.io
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 29 01:00:57 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.270724284Z" level=info msg="Starting up"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.271600588Z" level=info msg="containerd not running, starting managed containerd"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.272822672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.302335125Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330394940Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330541874Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330606589Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330622893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330711314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.330734119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331015184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331111006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331130811Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331141813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331235735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.331672837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.334892184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335058023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335218960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335519930Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335716776Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.335997141Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.336114268Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345670587Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345723999Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345743804Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345761908Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345778412Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.345976858Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.346887370Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347090617Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347203343Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347224248Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347240252Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347255955Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347271659Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347286962Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347303566Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347323871Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347337874Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347351277Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347376483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347394387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347408691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347453601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347469705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347483808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347499312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347514115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347528619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347634443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347732966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347749870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347771375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347796081Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347827288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347935413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.347956718Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348086248Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348121656Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348138660Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348150463Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348327504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348417125Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348468137Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.348924843Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.349067876Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.349126390Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 01:00:57 functional-583600 dockerd[662]: time="2024-02-29T01:00:57.349152196Z" level=info msg="containerd successfully booted in 0.047945s"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.381075108Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.394008111Z" level=info msg="Loading containers: start."
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.646849219Z" level=info msg="Loading containers: done."
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.663366278Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.663775373Z" level=info msg="Daemon has completed initialization"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.715844539Z" level=info msg="API listen on [::]:2376"
	Feb 29 01:00:57 functional-583600 dockerd[656]: time="2024-02-29T01:00:57.715977470Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 01:00:57 functional-583600 systemd[1]: Started Docker Application Container Engine.
	Feb 29 01:01:27 functional-583600 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.082774000Z" level=info msg="Processing signal 'terminated'"
	Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.084438866Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.084969887Z" level=info msg="Daemon shutdown complete"
	Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.085649314Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 01:01:27 functional-583600 dockerd[656]: time="2024-02-29T01:01:27.085809320Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 01:01:28 functional-583600 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 01:01:28 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:01:28 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:01:28 functional-583600 dockerd[999]: time="2024-02-29T01:01:28.160307636Z" level=info msg="Starting up"
	Feb 29 01:02:28 functional-583600 dockerd[999]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:02:28 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:02:28 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:02:28 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 01:02:28 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Feb 29 01:02:28 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:02:28 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:02:28 functional-583600 dockerd[1010]: time="2024-02-29T01:02:28.574763226Z" level=info msg="Starting up"
	Feb 29 01:03:28 functional-583600 dockerd[1010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:03:28 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:03:28 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:03:28 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 01:03:28 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Feb 29 01:03:28 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:03:28 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:03:28 functional-583600 dockerd[1223]: time="2024-02-29T01:03:28.781864216Z" level=info msg="Starting up"
	Feb 29 01:03:48 functional-583600 dockerd[1223]: time="2024-02-29T01:03:48.815476282Z" level=info msg="Processing signal 'terminated'"
	Feb 29 01:04:28 functional-583600 dockerd[1223]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:04:28 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:04:28 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:04:28 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:04:28 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:04:28 functional-583600 dockerd[1418]: time="2024-02-29T01:04:28.860805006Z" level=info msg="Starting up"
	Feb 29 01:05:28 functional-583600 dockerd[1418]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:05:28 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:05:28 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:05:28 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 01:05:29 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Feb 29 01:05:29 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:05:29 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:05:29 functional-583600 dockerd[1429]: time="2024-02-29T01:05:29.086941451Z" level=info msg="Starting up"
	Feb 29 01:06:29 functional-583600 dockerd[1429]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:06:29 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:06:29 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:06:29 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 01:06:29 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Feb 29 01:06:29 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:06:29 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:06:29 functional-583600 dockerd[1560]: time="2024-02-29T01:06:29.319034877Z" level=info msg="Starting up"
	Feb 29 01:07:29 functional-583600 dockerd[1560]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:07:29 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:07:29 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:07:29 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 01:07:29 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Feb 29 01:07:29 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:07:29 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:07:29 functional-583600 dockerd[1583]: time="2024-02-29T01:07:29.572209290Z" level=info msg="Starting up"
	Feb 29 01:08:29 functional-583600 dockerd[1583]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:08:29 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:08:29 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:08:29 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 01:08:29 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Feb 29 01:08:29 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:08:29 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:08:29 functional-583600 dockerd[1615]: time="2024-02-29T01:08:29.825776289Z" level=info msg="Starting up"
	Feb 29 01:09:29 functional-583600 dockerd[1615]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:09:29 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:09:29 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:09:29 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 01:09:30 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Feb 29 01:09:30 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:09:30 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:09:30 functional-583600 dockerd[1636]: time="2024-02-29T01:09:30.088340555Z" level=info msg="Starting up"
	Feb 29 01:10:30 functional-583600 dockerd[1636]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:10:30 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:10:30 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:10:30 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 01:10:30 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
	Feb 29 01:10:30 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:10:30 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:10:30 functional-583600 dockerd[1668]: time="2024-02-29T01:10:30.333446039Z" level=info msg="Starting up"
	Feb 29 01:11:30 functional-583600 dockerd[1668]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:11:30 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:11:30 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:11:30 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 01:11:30 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
	Feb 29 01:11:30 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:11:30 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:11:30 functional-583600 dockerd[1690]: time="2024-02-29T01:11:30.583005636Z" level=info msg="Starting up"
	Feb 29 01:12:30 functional-583600 dockerd[1690]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:12:30 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:12:30 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:12:30 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 01:12:30 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
	Feb 29 01:12:30 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:12:30 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:12:30 functional-583600 dockerd[1731]: time="2024-02-29T01:12:30.789136684Z" level=info msg="Starting up"
	Feb 29 01:13:30 functional-583600 dockerd[1731]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:13:30 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:13:30 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:13:30 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 01:13:31 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
	Feb 29 01:13:31 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:13:31 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:13:31 functional-583600 dockerd[1783]: time="2024-02-29T01:13:31.081605158Z" level=info msg="Starting up"
	Feb 29 01:14:31 functional-583600 dockerd[1783]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:14:31 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:14:31 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:14:31 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 01:14:31 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
	Feb 29 01:14:31 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:14:31 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:14:31 functional-583600 dockerd[1835]: time="2024-02-29T01:14:31.335462006Z" level=info msg="Starting up"
	Feb 29 01:15:31 functional-583600 dockerd[1835]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:15:31 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:15:31 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:15:31 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 01:15:31 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
	Feb 29 01:15:31 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:15:31 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:15:31 functional-583600 dockerd[1871]: time="2024-02-29T01:15:31.577918432Z" level=info msg="Starting up"
	Feb 29 01:16:31 functional-583600 dockerd[1871]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:16:31 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:16:31 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:16:31 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 01:16:31 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
	Feb 29 01:16:31 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:16:31 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:16:31 functional-583600 dockerd[2043]: time="2024-02-29T01:16:31.832770845Z" level=info msg="Starting up"
	Feb 29 01:17:25 functional-583600 dockerd[2043]: time="2024-02-29T01:17:25.228618402Z" level=info msg="Processing signal 'terminated'"
	Feb 29 01:17:31 functional-583600 dockerd[2043]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:17:31 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:17:31 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:17:31 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:17:31 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:17:31 functional-583600 dockerd[2367]: time="2024-02-29T01:17:31.907687034Z" level=info msg="Starting up"
	Feb 29 01:18:31 functional-583600 dockerd[2367]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:18:31 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:18:31 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:18:31 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-583600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 90
functional_test.go:757: restart took 2m17.1983788s for "functional-583600" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600: exit status 6 (11.416294s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 01:18:31.966814    9024 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 01:18:43.210191    9024 status.go:415] kubeconfig endpoint: extract IP: "functional-583600" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-583600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/ExtraConfig (148.62s)
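Note: the journal above shows docker.service restart-looping because dockerd cannot dial `/run/containerd/containerd.sock` within its 60-second deadline. A minimal sketch of quantifying the loop (the sample line is copied verbatim from the captured log; in practice one would pipe `journalctl -u docker` through the same expression):

```shell
# Extract the systemd restart counter from a journal line like those above.
# The sample line is taken from the captured log; feeding real `journalctl -u docker`
# output through the same awk expression would report the latest retry count.
line='Feb 29 01:16:31 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.'
counter=$(printf '%s\n' "$line" | awk -F'restart counter is at ' '{print $2+0}')
echo "$counter"
```

A steadily climbing counter with the same "failed to dial" error each cycle, as seen here, points at containerd never coming up rather than at dockerd itself.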

TestFunctional/serial/ComponentHealth (11.32s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-583600 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-583600 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (117.1033ms)

** stderr ** 
	error: context "functional-583600" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-583600 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600: exit status 6 (11.1854065s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 01:18:43.507693    8316 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 01:18:54.528192    8316 status.go:415] kubeconfig endpoint: extract IP: "functional-583600" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-583600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/ComponentHealth (11.32s)
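Note: `minikube status` above reports the host as Running but warns that kubectl points at a stale context, and minikube's own advice is to run `minikube update-context`. A minimal sketch of detecting that warning programmatically (the sample text is copied verbatim from the captured status output):

```shell
# Detect the stale-context warning in captured `minikube status` output.
# The sample text below is copied from the log above; in practice you would
# capture the real command's stdout instead.
status_out='Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`'
if printf '%s\n' "$status_out" | grep -q 'stale minikube-vm'; then
  echo "stale context detected"
fi
```

Per the warning text, `minikube update-context -p functional-583600` would regenerate the kubeconfig entry for the profile, after which `kubectl config current-context` should resolve again.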

TestFunctional/serial/InvalidService (0.13s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-583600 apply -f testdata\invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-583600 apply -f testdata\invalidsvc.yaml: exit status 1 (122.7578ms)

** stderr ** 
	error: context "functional-583600" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-583600 apply -f testdata\invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.13s)

TestFunctional/parallel/ConfigCmd (1.73s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-583600 config unset cpus" to be -""- but got *"W0229 01:22:33.410337   12872 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 config get cpus: exit status 14 (258.1625ms)

** stderr ** 
	W0229 01:22:33.684767    7320 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-583600 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0229 01:22:33.684767    7320 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-583600 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0229 01:22:33.951909    3792 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-583600 config get cpus" to be -""- but got *"W0229 01:22:34.343468     788 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-583600 config unset cpus" to be -""- but got *"W0229 01:22:34.633382   13496 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 config get cpus: exit status 14 (266.5841ms)

** stderr ** 
	W0229 01:22:34.900180    6204 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-583600 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0229 01:22:34.900180    6204 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.73s)
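Every command in this run emits the same `Unable to resolve the current Docker CLI context "default"` warning on stderr, and that extra line is what breaks each expected-output comparison above. The long hex path component in the warning is not random: Docker keys each context's metadata directory by the SHA-256 digest of the context name, and the digest in the log is exactly that of `default`. A minimal sketch (the helper name `context_meta_dir` is illustrative, not part of Docker or minikube):

```python
import hashlib

def context_meta_dir(name: str) -> str:
    # Docker stores a CLI context's metadata under
    # ~/.docker/contexts/meta/<sha256(name)>/meta.json
    return hashlib.sha256(name.encode("utf-8")).hexdigest()

# Matches the directory seen in the warnings above:
print(context_meta_dir("default"))
# → 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```

The warning itself is benign (the `default` context simply has no metadata file on this Jenkins worker), but because it lands on stderr it pollutes every comparison the `ConfigCmd` test performs.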

TestFunctional/parallel/StatusCmd (51.91s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 status: exit status 6 (12.7563957s)

-- stdout --
	functional-583600
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 01:22:57.729446   12648 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 01:23:10.321930   12648 status.go:415] kubeconfig endpoint: extract IP: "functional-583600" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
functional_test.go:852: failed to run minikube status. args "out/minikube-windows-amd64.exe -p functional-583600 status" : exit status 6
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 6 (12.7187718s)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Misconfigured
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 01:23:10.470739   13660 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 01:23:23.021800   13660 status.go:415] kubeconfig endpoint: extract IP: "functional-583600" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-windows-amd64.exe -p functional-583600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 6
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 status -o json: exit status 6 (12.8936423s)

-- stdout --
	{"Name":"functional-583600","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

-- /stdout --
** stderr ** 
	W0229 01:23:23.218145   12952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 01:23:35.918507   12952 status.go:415] kubeconfig endpoint: extract IP: "functional-583600" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-windows-amd64.exe -p functional-583600 status -o json" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600: exit status 6 (13.5389649s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 01:23:36.103117    2620 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 01:23:49.441093    2620 status.go:415] kubeconfig endpoint: extract IP: "functional-583600" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-583600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/parallel/StatusCmd (51.91s)
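All three `status` invocations fail identically: `status.go:415` cannot extract an endpoint IP because the profile name `functional-583600` is absent from the worker's kubeconfig, which still points at a stale `minikube-vm` context. A hedged sketch (not minikube's actual code) of the lookup that fails:

```python
def profile_in_kubeconfig(kubeconfig: dict, profile: str) -> bool:
    # status reports the kubeconfig as "Misconfigured" (and exits 6) when
    # the profile name does not appear among the kubeconfig contexts.
    return any(ctx.get("name") == profile
               for ctx in kubeconfig.get("contexts") or [])

# Illustrative shape of what the log suggests the worker's kubeconfig holds:
stale = {"contexts": [{"name": "minikube-vm"}], "current-context": "minikube-vm"}
print(profile_in_kubeconfig(stale, "functional-583600"))  # → False
```

As the logged warning says, `minikube update-context` rewrites the kubeconfig entry for the active profile, which would make this lookup succeed again.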

TestFunctional/parallel/ServiceCmdConnect (13.52s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-583600 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1625: (dbg) Non-zero exit: kubectl --context functional-583600 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8: exit status 1 (145.0865ms)

** stderr ** 
	error: context "functional-583600" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-583600 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-583600 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-583600 describe po hello-node-connect: exit status 1 (158.2399ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-583600

** /stderr **
functional_test.go:1600: "kubectl --context functional-583600 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-583600 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-583600 logs -l app=hello-node-connect: exit status 1 (118.8871ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-583600

** /stderr **
functional_test.go:1606: "kubectl --context functional-583600 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-583600 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-583600 describe svc hello-node-connect: exit status 1 (132.7581ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-583600

** /stderr **
functional_test.go:1612: "kubectl --context functional-583600 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600: exit status 6 (12.9257857s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 01:22:56.750446    6572 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 01:23:09.491220    6572 status.go:415] kubeconfig endpoint: extract IP: "functional-583600" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-583600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (13.52s)

TestFunctional/parallel/PersistentVolumeClaim (13.32s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-583600" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600: exit status 6 (13.3136996s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 01:22:44.417434   14020 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 01:22:57.537204   14020 status.go:415] kubeconfig endpoint: extract IP: "functional-583600" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-583600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (13.32s)

TestFunctional/parallel/MySQL (11.61s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-583600 replace --force -f testdata\mysql.yaml
functional_test.go:1789: (dbg) Non-zero exit: kubectl --context functional-583600 replace --force -f testdata\mysql.yaml: exit status 1 (137.1158ms)

** stderr ** 
	error: context "functional-583600" does not exist

** /stderr **
functional_test.go:1791: failed to kubectl replace mysql: args "kubectl --context functional-583600 replace --force -f testdata\\mysql.yaml" failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600: exit status 6 (11.4591572s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 01:24:11.419759    3936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 01:24:22.723446    3936 status.go:415] kubeconfig endpoint: extract IP: "functional-583600" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-583600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/parallel/MySQL (11.61s)

TestFunctional/parallel/CertSync (248.6s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/3312.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 ssh "sudo cat /etc/ssl/certs/3312.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 ssh "sudo cat /etc/ssl/certs/3312.pem": exit status 1 (11.0681791s)

-- stdout --
	cat: /etc/ssl/certs/3312.pem: No such file or directory

-- /stdout --
** stderr ** 
	W0229 01:23:37.778683    9476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/3312.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-583600 ssh \"sudo cat /etc/ssl/certs/3312.pem\"": exit status 1
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/3312.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	cat: /etc/ssl/certs/3312.pem: No such file or directory
	"""
)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/3312.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 ssh "sudo cat /usr/share/ca-certificates/3312.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 ssh "sudo cat /usr/share/ca-certificates/3312.pem": exit status 1 (9.8567985s)

-- stdout --
	cat: /usr/share/ca-certificates/3312.pem: No such file or directory

-- /stdout --
** stderr ** 
	W0229 01:23:48.869365   13628 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/3312.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-583600 ssh \"sudo cat /usr/share/ca-certificates/3312.pem\"": exit status 1
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/3312.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	cat: /usr/share/ca-certificates/3312.pem: No such file or directory
	"""
)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 1 (9.3216939s)

-- stdout --
	cat: /etc/ssl/certs/51391683.0: No such file or directory

-- /stdout --
** stderr ** 
	W0229 01:23:58.701315   14268 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-windows-amd64.exe -p functional-583600 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 1
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	cat: /etc/ssl/certs/51391683.0: No such file or directory
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/33122.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 ssh "sudo cat /etc/ssl/certs/33122.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 ssh "sudo cat /etc/ssl/certs/33122.pem": (8.9489899s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/33122.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 ssh "sudo cat /usr/share/ca-certificates/33122.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 ssh "sudo cat /usr/share/ca-certificates/33122.pem": exit status 1 (8.9466453s)

-- stdout --
	cat: /usr/share/ca-certificates/33122.pem: No such file or directory

-- /stdout --
** stderr ** 
	W0229 01:24:16.955218    8476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
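The `meta.json` path in the warning above is not arbitrary: the Docker CLI stores context metadata under `~/.docker/contexts/meta/<sha256(context name)>/meta.json`, so the long directory name is simply the SHA-256 digest of the string `default`. A minimal check (assumes a Linux shell with `sha256sum`; not part of the test suite):

```shell
# Reproduce the directory name from the "Unable to resolve the current Docker
# CLI context" warning: it is sha256 of the context name "default".
# printf %s avoids hashing a trailing newline.
printf %s default | sha256sum | cut -d' ' -f1
# -> 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```

The warning is therefore only saying that no `default` context metadata file exists on this Jenkins worker; it is noise relative to the actual failures below.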
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/33122.pem" inside minikube. args "out/minikube-windows-amd64.exe -p functional-583600 ssh \"sudo cat /usr/share/ca-certificates/33122.pem\"": exit status 1
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/33122.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	cat: /usr/share/ca-certificates/33122.pem: No such file or directory
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
E0229 01:24:28.593661    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 1 (9.1867881s)

-- stdout --
	cat: /etc/ssl/certs/3ec20f2e.0: No such file or directory

-- /stdout --
** stderr ** 
	W0229 01:24:25.909553    2992 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-windows-amd64.exe -p functional-583600 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 1
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	cat: /etc/ssl/certs/3ec20f2e.0: No such file or directory
	"""
)
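Names like `/etc/ssl/certs/3ec20f2e.0` come from OpenSSL's `c_rehash`-style layout: the file name is the certificate's subject hash plus a `.0` suffix, which is why CertSync probes the cert under both its install path and the hash name. A sketch with a throwaway self-signed cert (file names are illustrative; the subject mirrors the `C=US, ST=California` subject of the expected PEM, so the hash should match the `3ec20f2e.0` name above — assumes `openssl` is installed):

```shell
# Generate a throwaway cert with the same subject DN as minikube_test2.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/C=US/ST=California" \
  -keyout /tmp/demo.key -out /tmp/demo.pem 2>/dev/null
# subject_hash depends only on the (canonicalized) subject DN.
hash=$(openssl x509 -in /tmp/demo.pem -noout -subject_hash)
# CertSync would look for the cert at /etc/ssl/certs/${hash}.0 inside the VM.
echo "${hash}.0"
```

Both probes fail here with `No such file or directory`, consistent with the cert never having been synced into the VM rather than being installed under the wrong name.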
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600: exit status 2 (11.496736s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0229 01:24:35.099836    4320 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/CertSync FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/CertSync]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 logs -n 25: (2m47.8023929s)
helpers_test.go:252: TestFunctional/parallel/CertSync logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|    Command     |                           Args                           |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| addons         | functional-583600 addons list                            | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:22 UTC | 29 Feb 24 01:22 UTC |
	| addons         | functional-583600 addons list                            | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:22 UTC | 29 Feb 24 01:22 UTC |
	|                | -o json                                                  |                   |                   |         |                     |                     |
	| service        | functional-583600                                        | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:22 UTC |                     |
	|                | service hello-node --url                                 |                   |                   |         |                     |                     |
	|                | --format={{.IP}}                                         |                   |                   |         |                     |                     |
	| ssh            | functional-583600 ssh -n                                 | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:23 UTC | 29 Feb 24 01:23 UTC |
	|                | functional-583600 sudo cat                               |                   |                   |         |                     |                     |
	|                | /home/docker/cp-test.txt                                 |                   |                   |         |                     |                     |
	| service        | functional-583600 service                                | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:23 UTC |                     |
	|                | hello-node --url                                         |                   |                   |         |                     |                     |
	| cp             | functional-583600 cp                                     | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:23 UTC | 29 Feb 24 01:23 UTC |
	|                | testdata\cp-test.txt                                     |                   |                   |         |                     |                     |
	|                | /tmp/does/not/exist/cp-test.txt                          |                   |                   |         |                     |                     |
	| ssh            | functional-583600 ssh -n                                 | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:23 UTC | 29 Feb 24 01:23 UTC |
	|                | functional-583600 sudo cat                               |                   |                   |         |                     |                     |
	|                | /tmp/does/not/exist/cp-test.txt                          |                   |                   |         |                     |                     |
	| ssh            | functional-583600 ssh sudo                               | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:23 UTC |                     |
	|                | systemctl is-active crio                                 |                   |                   |         |                     |                     |
	| license        |                                                          | minikube          | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:23 UTC | 29 Feb 24 01:23 UTC |
	| start          | -p functional-583600                                     | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:23 UTC |                     |
	|                | --dry-run --memory                                       |                   |                   |         |                     |                     |
	|                | 250MB --alsologtostderr                                  |                   |                   |         |                     |                     |
	|                | --driver=hyperv                                          |                   |                   |         |                     |                     |
	| start          | -p functional-583600                                     | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:23 UTC |                     |
	|                | --dry-run --memory                                       |                   |                   |         |                     |                     |
	|                | 250MB --alsologtostderr                                  |                   |                   |         |                     |                     |
	|                | --driver=hyperv                                          |                   |                   |         |                     |                     |
	| ssh            | functional-583600 ssh sudo cat                           | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:23 UTC |                     |
	|                | /etc/ssl/certs/3312.pem                                  |                   |                   |         |                     |                     |
	| dashboard      | --url --port 36195                                       | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:23 UTC |                     |
	|                | -p functional-583600                                     |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=1                                   |                   |                   |         |                     |                     |
	| docker-env     | functional-583600 docker-env                             | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:23 UTC |                     |
	| ssh            | functional-583600 ssh sudo cat                           | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:23 UTC |                     |
	|                | /usr/share/ca-certificates/3312.pem                      |                   |                   |         |                     |                     |
	| ssh            | functional-583600 ssh sudo cat                           | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:23 UTC | 29 Feb 24 01:23 UTC |
	|                | /etc/test/nested/copy/3312/hosts                         |                   |                   |         |                     |                     |
	| image          | functional-583600 image load --daemon                    | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:23 UTC | 29 Feb 24 01:24 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-583600 |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                        |                   |                   |         |                     |                     |
	| ssh            | functional-583600 ssh sudo cat                           | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:23 UTC |                     |
	|                | /etc/ssl/certs/51391683.0                                |                   |                   |         |                     |                     |
	| ssh            | functional-583600 ssh sudo cat                           | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:24 UTC | 29 Feb 24 01:24 UTC |
	|                | /etc/ssl/certs/33122.pem                                 |                   |                   |         |                     |                     |
	| ssh            | functional-583600 ssh sudo cat                           | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:24 UTC |                     |
	|                | /usr/share/ca-certificates/33122.pem                     |                   |                   |         |                     |                     |
	| ssh            | functional-583600 ssh sudo cat                           | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:24 UTC |                     |
	|                | /etc/ssl/certs/3ec20f2e.0                                |                   |                   |         |                     |                     |
	| update-context | functional-583600                                        | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:24 UTC | 29 Feb 24 01:24 UTC |
	|                | update-context                                           |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |                   |         |                     |                     |
	| update-context | functional-583600                                        | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:24 UTC | 29 Feb 24 01:24 UTC |
	|                | update-context                                           |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |                   |         |                     |                     |
	| image          | functional-583600 image ls                               | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:24 UTC |                     |
	| update-context | functional-583600                                        | functional-583600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 01:24 UTC | 29 Feb 24 01:24 UTC |
	|                | update-context                                           |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |                   |         |                     |                     |
	|----------------|----------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 01:23:34
	Running on machine: minikube5
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 01:23:34.469815   12132 out.go:291] Setting OutFile to fd 1012 ...
	I0229 01:23:34.470816   12132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:23:34.470816   12132 out.go:304] Setting ErrFile to fd 748...
	I0229 01:23:34.470816   12132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:23:34.489825   12132 out.go:298] Setting JSON to false
	I0229 01:23:34.492831   12132 start.go:129] hostinfo: {"hostname":"minikube5","uptime":266041,"bootTime":1708903773,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 01:23:34.492831   12132 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 01:23:34.493831   12132 out.go:177] * [functional-583600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 01:23:34.494817   12132 notify.go:220] Checking for updates...
	I0229 01:23:34.494817   12132 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 01:23:34.495832   12132 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:23:34.496832   12132 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 01:23:34.496832   12132 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:23:34.497824   12132 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Feb 29 01:23:33 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:23:33 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:23:33 functional-583600 dockerd[3098]: time="2024-02-29T01:23:33.578666445Z" level=info msg="Starting up"
	Feb 29 01:24:33 functional-583600 dockerd[3098]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:24:33 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:24:33 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:24:33 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 01:24:33 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
	Feb 29 01:24:33 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:24:33 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:24:33 functional-583600 dockerd[3502]: time="2024-02-29T01:24:33.846987241Z" level=info msg="Starting up"
	Feb 29 01:24:33 functional-583600 dockerd[3502]: time="2024-02-29T01:24:33.994255667Z" level=info msg="Processing signal 'terminated'"
	Feb 29 01:25:33 functional-583600 dockerd[3502]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:25:33 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:25:33 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:25:33 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:25:33 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 01:25:33 functional-583600 dockerd[3618]: time="2024-02-29T01:25:33.971872237Z" level=info msg="Starting up"
	Feb 29 01:26:33 functional-583600 dockerd[3618]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 01:26:33 functional-583600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 01:26:33 functional-583600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 01:26:33 functional-583600 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 01:26:34 functional-583600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Feb 29 01:26:34 functional-583600 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 01:26:34 functional-583600 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-29T01:26:34Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	sudo: /var/lib/minikube/binaries/v1.28.4/kubectl: command not found
	
	
	==> dmesg <==
	[  +1.314197] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	[  +7.922365] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +40.130648] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.178961] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[Feb29 01:01] systemd-fstab-generator[927]: Ignoring "noauto" option for root device
	[  +0.108098] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.563333] systemd-fstab-generator[965]: Ignoring "noauto" option for root device
	[  +0.193255] systemd-fstab-generator[977]: Ignoring "noauto" option for root device
	[  +0.233535] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[Feb29 01:03] systemd-fstab-generator[1349]: Ignoring "noauto" option for root device
	[  +0.118307] kauditd_printk_skb: 78 callbacks suppressed
	[  +0.434291] systemd-fstab-generator[1384]: Ignoring "noauto" option for root device
	[  +0.189853] systemd-fstab-generator[1396]: Ignoring "noauto" option for root device
	[  +0.241944] systemd-fstab-generator[1410]: Ignoring "noauto" option for root device
	[Feb29 01:17] systemd-fstab-generator[2297]: Ignoring "noauto" option for root device
	[  +0.100820] kauditd_printk_skb: 78 callbacks suppressed
	[  +0.424647] systemd-fstab-generator[2333]: Ignoring "noauto" option for root device
	[  +0.183560] systemd-fstab-generator[2345]: Ignoring "noauto" option for root device
	[  +0.226070] systemd-fstab-generator[2359]: Ignoring "noauto" option for root device
	[Feb29 01:23] systemd-fstab-generator[3227]: Ignoring "noauto" option for root device
	[  +0.148001] kauditd_printk_skb: 78 callbacks suppressed
	[Feb29 01:24] systemd-fstab-generator[3513]: Ignoring "noauto" option for root device
	[  +0.133708] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:27:34 up 27 min,  0 users,  load average: 0.00, 0.00, 0.00
	Linux functional-583600 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	-- No entries --
	

-- /stdout --
** stderr ** 
	W0229 01:24:46.582402   11120 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 01:25:33.692027   11120 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0229 01:26:33.813063   11120 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0229 01:26:33.854291   11120 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0229 01:26:33.905507   11120 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0229 01:26:33.940492   11120 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0229 01:26:33.981494   11120 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0229 01:26:34.022327   11120 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-583600 -n functional-583600
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-583600 -n functional-583600: exit status 2 (11.9643021s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0229 01:27:34.405663    4272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-583600" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (248.60s)

TestFunctional/parallel/NodeLabels (11.87s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels


=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-583600 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-583600 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (112.6433ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-583600

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-583600 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-583600

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-583600

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-583600

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-583600

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-583600

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-583600 -n functional-583600: exit status 6 (11.7461742s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 01:23:59.539607    8540 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 01:24:11.114306    8540 status.go:415] kubeconfig endpoint: extract IP: "functional-583600" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-583600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/parallel/NodeLabels (11.87s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-583600 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1435: (dbg) Non-zero exit: kubectl --context functional-583600 create deployment hello-node --image=registry.k8s.io/echoserver:1.8: exit status 1 (117.3049ms)

                                                
                                                
** stderr ** 
	error: context "functional-583600" does not exist

                                                
                                                
** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-583600 create deployment hello-node --image=registry.k8s.io/echoserver:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.13s)

TestFunctional/parallel/ServiceCmd/List (9.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 service list: exit status 119 (9.1621799s)

                                                
                                                
-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-583600"

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 01:22:33.525347    1076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! This is unusual - you may want to investigate using "minikube logs -p functional-583600"

                                                
                                                
** /stderr **
functional_test.go:1457: failed to do service list. args "out/minikube-windows-amd64.exe -p functional-583600 service list" : exit status 119
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* This control plane is not running! (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-583600\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (9.16s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-583600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-583600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 119. stderr: W0229 01:22:35.180622   14072 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0229 01:22:35.281625   14072 out.go:291] Setting OutFile to fd 956 ...
I0229 01:22:35.291620   14072 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:22:35.291620   14072 out.go:304] Setting ErrFile to fd 748...
I0229 01:22:35.291620   14072 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:22:35.310637   14072 mustload.go:65] Loading cluster: functional-583600
I0229 01:22:35.311633   14072 config.go:182] Loaded profile config "functional-583600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 01:22:35.312639   14072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
I0229 01:22:38.301811   14072 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0229 01:22:38.301811   14072 main.go:141] libmachine: [stderr =====>] : 
I0229 01:22:38.301811   14072 host.go:66] Checking if "functional-583600" exists ...
I0229 01:22:38.302808   14072 api_server.go:166] Checking apiserver status ...
I0229 01:22:38.312406   14072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0229 01:22:38.312406   14072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
I0229 01:22:41.056202   14072 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0229 01:22:41.056202   14072 main.go:141] libmachine: [stderr =====>] : 
I0229 01:22:41.056202   14072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
I0229 01:22:43.882104   14072 main.go:141] libmachine: [stdout =====>] : 172.19.5.240

                                                
                                                
I0229 01:22:43.882104   14072 main.go:141] libmachine: [stderr =====>] : 
I0229 01:22:43.882104   14072 sshutil.go:53] new ssh client: &{IP:172.19.5.240 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-583600\id_rsa Username:docker}
I0229 01:22:44.010867   14072 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.6981421s)
W0229 01:22:44.010867   14072 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I0229 01:22:44.012309   14072 out.go:177] * This control plane is not running! (state=Stopped)
W0229 01:22:44.012944   14072 out.go:239] ! This is unusual - you may want to investigate using "minikube logs -p functional-583600"
! This is unusual - you may want to investigate using "minikube logs -p functional-583600"
I0229 01:22:44.013844   14072 out.go:177]   To start a cluster, run: "minikube start -p functional-583600"

                                                
                                                
stdout: * This control plane is not running! (state=Stopped)
To start a cluster, run: "minikube start -p functional-583600"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-583600 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-583600 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-583600 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-583600 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 5748: OpenProcess: The parameter is incorrect.
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-583600 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-583600 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.14s)

TestFunctional/parallel/ServiceCmd/JSONOutput (8.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 service list -o json: exit status 119 (8.030237s)

                                                
                                                
-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-583600"

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 01:22:42.736444    2428 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! This is unusual - you may want to investigate using "minikube logs -p functional-583600"

                                                
                                                
** /stderr **
functional_test.go:1487: failed to list services with json format. args "out/minikube-windows-amd64.exe -p functional-583600 service list -o json": exit status 119
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (8.03s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-583600": client config: context "functional-583600" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/ServiceCmd/HTTPS (7.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 service --namespace=default --https --url hello-node: exit status 119 (7.7811946s)

                                                
                                                
-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-583600"

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 01:22:50.742301   12416 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! This is unusual - you may want to investigate using "minikube logs -p functional-583600"

                                                
                                                
** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-583600 service --namespace=default --https --url hello-node" : exit status 119
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (7.78s)

TestFunctional/parallel/ServiceCmd/Format (7.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 service hello-node --url --format={{.IP}}: exit status 119 (7.8186933s)

                                                
                                                
-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-583600"

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 01:22:58.522416    4460 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! This is unusual - you may want to investigate using "minikube logs -p functional-583600"

                                                
                                                
** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-583600 service hello-node --url --format={{.IP}}": exit status 119
functional_test.go:1544: "* This control plane is not running! (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-583600\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (7.82s)

TestFunctional/parallel/ServiceCmd/URL (7.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 service hello-node --url: exit status 119 (7.8477444s)

                                                
                                                
-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-583600"

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 01:23:06.350095   10404 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! This is unusual - you may want to investigate using "minikube logs -p functional-583600"

                                                
                                                
** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-583600 service hello-node --url": exit status 119
functional_test.go:1561: found endpoint for hello-node: * This control plane is not running! (state=Stopped)
To start a cluster, run: "minikube start -p functional-583600"
functional_test.go:1565: failed to parse "* This control plane is not running! (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-583600\"": parse "* This control plane is not running! (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-583600\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (7.85s)

TestFunctional/parallel/DockerEnv/powershell (477.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-583600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-583600"
functional_test.go:495: (dbg) Non-zero exit: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-583600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-583600": exit status 1 (7m57.2023218s)

                                                
                                                
** stderr ** 
	W0229 01:23:40.293331   13560 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_DOCKER_SCRIPT: Error generating set output: write /dev/stdout: The pipe is being closed.
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_docker-env_1f5562ba2f20b73b531869f0520020e4bb661a3b_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	E0229 01:31:35.449189   13560 out.go:190] Fprintf failed: write /dev/stdout: The pipe is being closed.

                                                
                                                
** /stderr **
functional_test.go:498: failed to run the command by deadline. exceeded timeout. powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-583600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-583600"
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/powershell (477.21s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 update-context --alsologtostderr -v=2: (2.3501479s)
functional_test.go:2122: update-context: got="* \"functional-583600\" context has been updated to point to 172.19.5.240:8441\n* Current context is \"functional-583600\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (2.35s)

TestFunctional/parallel/ImageCommands/ImageListShort (60.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 image ls --format short --alsologtostderr: (1m0.0212897s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-583600 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-583600 image ls --format short --alsologtostderr:
W0229 01:33:36.315149    8876 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0229 01:33:36.397156    8876 out.go:291] Setting OutFile to fd 984 ...
I0229 01:33:36.412146    8876 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:33:36.412146    8876 out.go:304] Setting ErrFile to fd 760...
I0229 01:33:36.412146    8876 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:33:36.428145    8876 config.go:182] Loaded profile config "functional-583600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 01:33:36.428145    8876 config.go:182] Loaded profile config "functional-583600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 01:33:36.429143    8876 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
I0229 01:33:38.722188    8876 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0229 01:33:38.722725    8876 main.go:141] libmachine: [stderr =====>] : 
I0229 01:33:38.737703    8876 ssh_runner.go:195] Run: systemctl --version
I0229 01:33:38.737763    8876 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
I0229 01:33:41.065088    8876 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0229 01:33:41.065088    8876 main.go:141] libmachine: [stderr =====>] : 
I0229 01:33:41.065088    8876 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
I0229 01:33:43.653985    8876 main.go:141] libmachine: [stdout =====>] : 172.19.5.240

                                                
                                                
I0229 01:33:43.653985    8876 main.go:141] libmachine: [stderr =====>] : 
I0229 01:33:43.654766    8876 sshutil.go:53] new ssh client: &{IP:172.19.5.240 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-583600\id_rsa Username:docker}
I0229 01:33:43.752946    8876 ssh_runner.go:235] Completed: systemctl --version: (5.0149639s)
I0229 01:33:43.763126    8876 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0229 01:34:36.157730    8876 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (52.3916135s)
W0229 01:34:36.157853    8876 cache_images.go:715] Failed to list images for profile functional-583600 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

                                                
                                                
stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (60.02s)

TestFunctional/parallel/ImageCommands/ImageListTable (60.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 image ls --format table --alsologtostderr: (1m0.1309299s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-583600 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-583600 image ls --format table --alsologtostderr:
W0229 01:34:36.333779    7996 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0229 01:34:36.399788    7996 out.go:291] Setting OutFile to fd 780 ...
I0229 01:34:36.399788    7996 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:34:36.399788    7996 out.go:304] Setting ErrFile to fd 868...
I0229 01:34:36.399788    7996 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:34:36.414802    7996 config.go:182] Loaded profile config "functional-583600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 01:34:36.414802    7996 config.go:182] Loaded profile config "functional-583600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 01:34:36.415787    7996 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
I0229 01:34:38.773862    7996 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0229 01:34:38.773862    7996 main.go:141] libmachine: [stderr =====>] : 
I0229 01:34:38.785218    7996 ssh_runner.go:195] Run: systemctl --version
I0229 01:34:38.786216    7996 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
I0229 01:34:41.034539    7996 main.go:141] libmachine: [stdout =====>] : Running

I0229 01:34:41.034615    7996 main.go:141] libmachine: [stderr =====>] : 
I0229 01:34:41.034691    7996 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
I0229 01:34:43.557750    7996 main.go:141] libmachine: [stdout =====>] : 172.19.5.240

I0229 01:34:43.557750    7996 main.go:141] libmachine: [stderr =====>] : 
I0229 01:34:43.558207    7996 sshutil.go:53] new ssh client: &{IP:172.19.5.240 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-583600\id_rsa Username:docker}
I0229 01:34:43.659993    7996 ssh_runner.go:235] Completed: systemctl --version: (4.8744497s)
I0229 01:34:43.671537    7996 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0229 01:35:36.341384    7996 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (52.6669117s)
W0229 01:35:36.341532    7996 cache_images.go:715] Failed to list images for profile functional-583600 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (60.13s)

TestFunctional/parallel/ImageCommands/ImageListJson (59.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 image ls --format json --alsologtostderr
E0229 01:34:28.633239    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 image ls --format json --alsologtostderr: (59.9722291s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-583600 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-583600 image ls --format json --alsologtostderr:
W0229 01:33:36.312151    8480 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0229 01:33:36.393158    8480 out.go:291] Setting OutFile to fd 516 ...
I0229 01:33:36.394149    8480 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:33:36.394149    8480 out.go:304] Setting ErrFile to fd 816...
I0229 01:33:36.394149    8480 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:33:36.410163    8480 config.go:182] Loaded profile config "functional-583600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 01:33:36.410163    8480 config.go:182] Loaded profile config "functional-583600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 01:33:36.411149    8480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
I0229 01:33:38.728623    8480 main.go:141] libmachine: [stdout =====>] : Running

I0229 01:33:38.728623    8480 main.go:141] libmachine: [stderr =====>] : 
I0229 01:33:38.742660    8480 ssh_runner.go:195] Run: systemctl --version
I0229 01:33:38.743682    8480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
I0229 01:33:41.083164    8480 main.go:141] libmachine: [stdout =====>] : Running

I0229 01:33:41.084153    8480 main.go:141] libmachine: [stderr =====>] : 
I0229 01:33:41.084153    8480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
I0229 01:33:43.624781    8480 main.go:141] libmachine: [stdout =====>] : 172.19.5.240

I0229 01:33:43.624827    8480 main.go:141] libmachine: [stderr =====>] : 
I0229 01:33:43.624827    8480 sshutil.go:53] new ssh client: &{IP:172.19.5.240 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-583600\id_rsa Username:docker}
I0229 01:33:43.722019    8480 ssh_runner.go:235] Completed: systemctl --version: (4.9790819s)
I0229 01:33:43.729754    8480 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0229 01:34:36.156979    8480 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (52.4243084s)
W0229 01:34:36.157047    8480 cache_images.go:715] Failed to list images for profile functional-583600 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (59.97s)

TestFunctional/parallel/ImageCommands/ImageListYaml (60.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 image ls --format yaml --alsologtostderr: (1m0.0332939s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-583600 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-583600 image ls --format yaml --alsologtostderr:
W0229 01:33:36.312151    6300 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0229 01:33:36.393158    6300 out.go:291] Setting OutFile to fd 580 ...
I0229 01:33:36.394149    6300 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:33:36.394149    6300 out.go:304] Setting ErrFile to fd 1012...
I0229 01:33:36.394149    6300 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:33:36.409160    6300 config.go:182] Loaded profile config "functional-583600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 01:33:36.410163    6300 config.go:182] Loaded profile config "functional-583600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 01:33:36.410163    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
I0229 01:33:38.717897    6300 main.go:141] libmachine: [stdout =====>] : Running

I0229 01:33:38.717897    6300 main.go:141] libmachine: [stderr =====>] : 
I0229 01:33:38.742660    6300 ssh_runner.go:195] Run: systemctl --version
I0229 01:33:38.742660    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
I0229 01:33:41.113205    6300 main.go:141] libmachine: [stdout =====>] : Running

I0229 01:33:41.113330    6300 main.go:141] libmachine: [stderr =====>] : 
I0229 01:33:41.113330    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
I0229 01:33:43.675443    6300 main.go:141] libmachine: [stdout =====>] : 172.19.5.240

I0229 01:33:43.675443    6300 main.go:141] libmachine: [stderr =====>] : 
I0229 01:33:43.675688    6300 sshutil.go:53] new ssh client: &{IP:172.19.5.240 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-583600\id_rsa Username:docker}
I0229 01:33:43.771956    6300 ssh_runner.go:235] Completed: systemctl --version: (5.0290166s)
I0229 01:33:43.783156    6300 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0229 01:34:36.165884    6300 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (52.3798142s)
W0229 01:34:36.166511    6300 cache_images.go:715] Failed to list images for profile functional-583600 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (60.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (120.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 ssh pgrep buildkitd: exit status 1 (9.5390792s)

** stderr ** 
	W0229 01:34:36.286786    5280 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 image build -t localhost/my-image:functional-583600 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 image build -t localhost/my-image:functional-583600 testdata\build --alsologtostderr: (50.6852159s)
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-583600 image build -t localhost/my-image:functional-583600 testdata\build --alsologtostderr:
W0229 01:34:45.829781   13112 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0229 01:34:45.884778   13112 out.go:291] Setting OutFile to fd 1064 ...
I0229 01:34:45.899917   13112 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:34:45.899917   13112 out.go:304] Setting ErrFile to fd 1076...
I0229 01:34:45.899917   13112 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:34:45.917727   13112 config.go:182] Loaded profile config "functional-583600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 01:34:45.934676   13112 config.go:182] Loaded profile config "functional-583600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 01:34:45.935526   13112 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
I0229 01:34:48.030680   13112 main.go:141] libmachine: [stdout =====>] : Running

I0229 01:34:48.030680   13112 main.go:141] libmachine: [stderr =====>] : 
I0229 01:34:48.040159   13112 ssh_runner.go:195] Run: systemctl --version
I0229 01:34:48.040683   13112 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-583600 ).state
I0229 01:34:50.040362   13112 main.go:141] libmachine: [stdout =====>] : Running

I0229 01:34:50.040362   13112 main.go:141] libmachine: [stderr =====>] : 
I0229 01:34:50.040362   13112 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-583600 ).networkadapters[0]).ipaddresses[0]
I0229 01:34:52.447426   13112 main.go:141] libmachine: [stdout =====>] : 172.19.5.240

I0229 01:34:52.447503   13112 main.go:141] libmachine: [stderr =====>] : 
I0229 01:34:52.447906   13112 sshutil.go:53] new ssh client: &{IP:172.19.5.240 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-583600\id_rsa Username:docker}
I0229 01:34:52.538920   13112 ssh_runner.go:235] Completed: systemctl --version: (4.4985111s)
I0229 01:34:52.538920   13112 build_images.go:151] Building image from path: C:\Users\jenkins.minikube5\AppData\Local\Temp\build.3625674958.tar
I0229 01:34:52.550344   13112 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0229 01:34:52.581639   13112 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3625674958.tar
I0229 01:34:52.587341   13112 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3625674958.tar: stat -c "%s %y" /var/lib/minikube/build/build.3625674958.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3625674958.tar': No such file or directory
I0229 01:34:52.587341   13112 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\AppData\Local\Temp\build.3625674958.tar --> /var/lib/minikube/build/build.3625674958.tar (3072 bytes)
I0229 01:34:52.655087   13112 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3625674958
I0229 01:34:52.684015   13112 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3625674958 -xf /var/lib/minikube/build/build.3625674958.tar
I0229 01:34:52.702595   13112 docker.go:360] Building image: /var/lib/minikube/build/build.3625674958
I0229 01:34:52.710122   13112 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-583600 /var/lib/minikube/build/build.3625674958
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0229 01:35:36.344846   13112 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-583600 /var/lib/minikube/build/build.3625674958: (43.6322631s)
W0229 01:35:36.344846   13112 build_images.go:115] Failed to build image for profile functional-583600. make sure the profile is running. Docker build /var/lib/minikube/build/build.3625674958.tar: buildimage docker: docker build -t localhost/my-image:functional-583600 /var/lib/minikube/build/build.3625674958: Process exited with status 1
stdout:

stderr:
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0229 01:35:36.344846   13112 build_images.go:123] succeeded building to: 
I0229 01:35:36.344846   13112 build_images.go:124] failed building to: functional-583600
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 image ls: (1m0.2971781s)
functional_test.go:442: expected "localhost/my-image:functional-583600" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (120.52s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (102.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 image load --daemon gcr.io/google-containers/addon-resizer:functional-583600 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 image load --daemon gcr.io/google-containers/addon-resizer:functional-583600 --alsologtostderr: (42.0352288s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 image ls: (1m0.2605029s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-583600" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (102.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (120.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 image load --daemon gcr.io/google-containers/addon-resizer:functional-583600 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 image load --daemon gcr.io/google-containers/addon-resizer:functional-583600 --alsologtostderr: (1m0.1338754s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 image ls
E0229 01:27:31.838670    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 image ls: (1m0.3221045s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-583600" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (120.46s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (120.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.1384571s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-583600
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 image load --daemon gcr.io/google-containers/addon-resizer:functional-583600 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 image load --daemon gcr.io/google-containers/addon-resizer:functional-583600 --alsologtostderr: (56.87234s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 image ls
E0229 01:29:28.614264    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 image ls: (1m0.2980188s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-583600" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (120.51s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (60.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 image save gcr.io/google-containers/addon-resizer:functional-583600 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 image save gcr.io/google-containers/addon-resizer:functional-583600 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (1m0.3337033s)
functional_test.go:385: expected "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (60.33s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: exit status 80 (361.6795ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	W0229 01:32:35.827061    6684 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 01:32:35.892575    6684 out.go:291] Setting OutFile to fd 856 ...
	I0229 01:32:35.908937    6684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:32:35.908937    6684 out.go:304] Setting ErrFile to fd 984...
	I0229 01:32:35.908937    6684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:32:35.925440    6684 config.go:182] Loaded profile config "functional-583600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 01:32:35.925440    6684 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\C_\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar
	I0229 01:32:36.038578    6684 cache.go:107] acquiring lock: {Name:mk973b015b4af00973c8f7caa2e3032e512e6a8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:32:36.040575    6684 cache.go:96] cache image "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar" -> "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar" took 115.1286ms
	I0229 01:32:36.042945    6684 out.go:177] 
	W0229 01:32:36.043832    6684 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar
	W0229 01:32:36.043832    6684 out.go:239] * 
	* 
	W0229 01:32:36.050253    6684 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_image_31f64792f90da1502c14438d665d6a1efcaad7c2_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_image_31f64792f90da1502c14438d665d6a1efcaad7c2_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:32:36.050705    6684 out.go:177] 

** /stderr **
functional_test.go:410: loading image into minikube from file: exit status 80

-- stdout --
	
	

-- /stdout --
** stderr ** 
	W0229 01:32:35.827061    6684 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 01:32:35.892575    6684 out.go:291] Setting OutFile to fd 856 ...
	I0229 01:32:35.908937    6684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:32:35.908937    6684 out.go:304] Setting ErrFile to fd 984...
	I0229 01:32:35.908937    6684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:32:35.925440    6684 config.go:182] Loaded profile config "functional-583600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 01:32:35.925440    6684 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\C_\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar
	I0229 01:32:36.038578    6684 cache.go:107] acquiring lock: {Name:mk973b015b4af00973c8f7caa2e3032e512e6a8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:32:36.040575    6684 cache.go:96] cache image "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar" -> "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar" took 115.1286ms
	I0229 01:32:36.042945    6684 out.go:177] 
	W0229 01:32:36.043832    6684 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar
	W0229 01:32:36.043832    6684 out.go:239] * 
	W0229 01:32:36.050253    6684 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_image_31f64792f90da1502c14438d665d6a1efcaad7c2_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:32:36.050705    6684 out.go:177] 

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.36s)
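The GUEST_IMAGE_LOAD failure above follows from the "windows sanitize" step logged at 01:32:35.925440: the drive-letter colon in the tarball's host path is rewritten (`C:\jenkins\...` becomes `C_\jenkins\...`) so the absolute path can be nested under the cache directory, and that file path is then handed to the image-reference parser, which rejects it ("could not parse reference"). A minimal sketch of the rewrite, assuming a hypothetical helper `sanitizeWindowsPath` rather than minikube's actual localpath code:

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeWindowsPath mirrors the "windows sanitize" step visible in the log:
// the drive-letter colon is replaced with an underscore so the absolute host
// path can be embedded under the image cache directory. The load then fails
// downstream because this file path is later treated as an image reference,
// which a Windows path can never be.
// Hypothetical helper for illustration only.
func sanitizeWindowsPath(p string) string {
	// Replace only the first colon (the drive separator).
	return strings.Replace(p, ":", "_", 1)
}

func main() {
	src := `C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar`
	fmt.Println(sanitizeWindowsPath(src))
	// -> C_\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar
}
```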

TestIngressAddonLegacy/StartLegacyK8sCluster (402.47s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-589700 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv
E0229 01:44:11.909172    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
E0229 01:44:28.663676    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-589700 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv: exit status 109 (6m42.1337785s)

-- stdout --
	* [ingress-addon-legacy-589700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node ingress-addon-legacy-589700 in cluster ingress-addon-legacy-589700
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating hyperv VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 29 01:48:40 ingress-addon-legacy-589700 kubelet[37392]: F0229 01:48:40.140244   37392 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	  Feb 29 01:48:41 ingress-addon-legacy-589700 kubelet[37588]: F0229 01:48:41.355076   37588 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	  Feb 29 01:48:42 ingress-addon-legacy-589700 kubelet[37781]: F0229 01:48:42.589380   37781 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	
	

-- /stdout --
** stderr ** 
	W0229 01:42:05.724729   12656 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 01:42:05.783767   12656 out.go:291] Setting OutFile to fd 1224 ...
	I0229 01:42:05.784087   12656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:42:05.784087   12656 out.go:304] Setting ErrFile to fd 1228...
	I0229 01:42:05.784647   12656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:42:05.804080   12656 out.go:298] Setting JSON to false
	I0229 01:42:05.807585   12656 start.go:129] hostinfo: {"hostname":"minikube5","uptime":267152,"bootTime":1708903773,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 01:42:05.807585   12656 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 01:42:05.809802   12656 out.go:177] * [ingress-addon-legacy-589700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 01:42:05.810906   12656 notify.go:220] Checking for updates...
	I0229 01:42:05.811014   12656 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 01:42:05.811598   12656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:42:05.811821   12656 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 01:42:05.812425   12656 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:42:05.813109   12656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:42:05.814446   12656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:42:10.835384   12656 out.go:177] * Using the hyperv driver based on user configuration
	I0229 01:42:10.835964   12656 start.go:299] selected driver: hyperv
	I0229 01:42:10.835964   12656 start.go:903] validating driver "hyperv" against <nil>
	I0229 01:42:10.835964   12656 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 01:42:10.883116   12656 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 01:42:10.884024   12656 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 01:42:10.884024   12656 cni.go:84] Creating CNI manager for ""
	I0229 01:42:10.884024   12656 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 01:42:10.884024   12656 start_flags.go:323] config:
	{Name:ingress-addon-legacy-589700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-589700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:42:10.884979   12656 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:42:10.886213   12656 out.go:177] * Starting control plane node ingress-addon-legacy-589700 in cluster ingress-addon-legacy-589700
	I0229 01:42:10.886213   12656 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 01:42:10.966237   12656 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0229 01:42:10.966807   12656 cache.go:56] Caching tarball of preloaded images
	I0229 01:42:10.967308   12656 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 01:42:10.968152   12656 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0229 01:42:10.968795   12656 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0229 01:42:11.039522   12656 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0229 01:42:15.437050   12656 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0229 01:42:15.438243   12656 preload.go:256] verifying checksum of C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0229 01:42:16.544744   12656 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0229 01:42:16.545564   12656 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\config.json ...
	I0229 01:42:16.546294   12656 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\config.json: {Name:mkfe627708e9dcad871ea6b42d4334b4b9ddbd5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:42:16.547669   12656 start.go:365] acquiring machines lock for ingress-addon-legacy-589700: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 01:42:16.548030   12656 start.go:369] acquired machines lock for "ingress-addon-legacy-589700" in 360.3µs
	I0229 01:42:16.548030   12656 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-589700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-589700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 01:42:16.548386   12656 start.go:125] createHost starting for "" (driver="hyperv")
	I0229 01:42:16.549332   12656 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0229 01:42:16.549332   12656 start.go:159] libmachine.API.Create for "ingress-addon-legacy-589700" (driver="hyperv")
	I0229 01:42:16.549332   12656 client.go:168] LocalClient.Create starting
	I0229 01:42:16.550484   12656 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0229 01:42:16.560177   12656 main.go:141] libmachine: Decoding PEM data...
	I0229 01:42:16.560177   12656 main.go:141] libmachine: Parsing certificate...
	I0229 01:42:16.560177   12656 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0229 01:42:16.568495   12656 main.go:141] libmachine: Decoding PEM data...
	I0229 01:42:16.568495   12656 main.go:141] libmachine: Parsing certificate...
	I0229 01:42:16.568495   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0229 01:42:18.559489   12656 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0229 01:42:18.559566   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:42:18.559637   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0229 01:42:20.242248   12656 main.go:141] libmachine: [stdout =====>] : False
	
	I0229 01:42:20.242248   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:42:20.242248   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 01:42:21.665843   12656 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 01:42:21.666052   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:42:21.666052   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 01:42:25.116319   12656 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 01:42:25.116319   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:42:25.118710   12656 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 01:42:25.523898   12656 main.go:141] libmachine: Creating SSH key...
	I0229 01:42:25.723795   12656 main.go:141] libmachine: Creating VM...
	I0229 01:42:25.723795   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 01:42:28.370059   12656 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 01:42:28.370468   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:42:28.370468   12656 main.go:141] libmachine: Using switch "Default Switch"
	I0229 01:42:28.370468   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 01:42:30.047512   12656 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 01:42:30.047512   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:42:30.047635   12656 main.go:141] libmachine: Creating VHD
	I0229 01:42:30.047776   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-589700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0229 01:42:33.680536   12656 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-58970
	                          0\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 84CE1A9D-021A-4B76-9296-C33730F3EFA7
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0229 01:42:33.681057   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:42:33.681057   12656 main.go:141] libmachine: Writing magic tar header
	I0229 01:42:33.681126   12656 main.go:141] libmachine: Writing SSH key tar header
	I0229 01:42:33.693187   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-589700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-589700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0229 01:42:36.734803   12656 main.go:141] libmachine: [stdout =====>] : 
	I0229 01:42:36.734803   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:42:36.734803   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-589700\disk.vhd' -SizeBytes 20000MB
	I0229 01:42:39.145995   12656 main.go:141] libmachine: [stdout =====>] : 
	I0229 01:42:39.145995   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:42:39.145995   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ingress-addon-legacy-589700 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-589700' -SwitchName 'Default Switch' -MemoryStartupBytes 4096MB
	I0229 01:42:43.007690   12656 main.go:141] libmachine: [stdout =====>] : 
	Name                        State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                        ----- ----------- ----------------- ------   ------             -------
	ingress-addon-legacy-589700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 01:42:43.007690   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:42:43.007788   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ingress-addon-legacy-589700 -DynamicMemoryEnabled $false
	I0229 01:42:45.133724   12656 main.go:141] libmachine: [stdout =====>] : 
	I0229 01:42:45.133724   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:42:45.134104   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ingress-addon-legacy-589700 -Count 2
	I0229 01:42:47.200871   12656 main.go:141] libmachine: [stdout =====>] : 
	I0229 01:42:47.200871   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:42:47.200957   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ingress-addon-legacy-589700 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-589700\boot2docker.iso'
	I0229 01:42:49.599465   12656 main.go:141] libmachine: [stdout =====>] : 
	I0229 01:42:49.599732   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:42:49.599732   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ingress-addon-legacy-589700 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-589700\disk.vhd'
	I0229 01:42:51.992314   12656 main.go:141] libmachine: [stdout =====>] : 
	I0229 01:42:51.992980   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:42:51.992980   12656 main.go:141] libmachine: Starting VM...
	I0229 01:42:51.993056   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ingress-addon-legacy-589700
	I0229 01:42:54.668278   12656 main.go:141] libmachine: [stdout =====>] : 
	I0229 01:42:54.668278   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:42:54.668278   12656 main.go:141] libmachine: Waiting for host to start...
	I0229 01:42:54.668374   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:42:56.789279   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:42:56.789473   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:42:56.789473   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:42:59.134499   12656 main.go:141] libmachine: [stdout =====>] : 
	I0229 01:42:59.134499   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:00.135203   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:43:02.182400   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:43:02.182400   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:02.182400   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:43:04.564169   12656 main.go:141] libmachine: [stdout =====>] : 
	I0229 01:43:04.564220   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:05.565810   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:43:07.619087   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:43:07.619087   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:07.619314   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:43:09.934600   12656 main.go:141] libmachine: [stdout =====>] : 
	I0229 01:43:09.934661   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:10.949039   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:43:13.013710   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:43:13.013710   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:13.014255   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:43:15.367558   12656 main.go:141] libmachine: [stdout =====>] : 
	I0229 01:43:15.367558   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:16.372126   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:43:18.389000   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:43:18.389000   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:18.389934   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:43:20.794478   12656 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:43:20.794478   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:20.795402   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:43:22.821143   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:43:22.821143   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:22.821143   12656 machine.go:88] provisioning docker machine ...
	I0229 01:43:22.821601   12656 buildroot.go:166] provisioning hostname "ingress-addon-legacy-589700"
	I0229 01:43:22.821670   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:43:24.879136   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:43:24.879216   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:24.879216   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:43:27.281104   12656 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:43:27.281325   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:27.285357   12656 main.go:141] libmachine: Using SSH client type: native
	I0229 01:43:27.295332   12656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.11.152 22 <nil> <nil>}
	I0229 01:43:27.295332   12656 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-589700 && echo "ingress-addon-legacy-589700" | sudo tee /etc/hostname
	I0229 01:43:27.462832   12656 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-589700
	
	I0229 01:43:27.463369   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:43:29.512767   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:43:29.512767   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:29.513319   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:43:31.939383   12656 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:43:31.940236   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:31.944476   12656 main.go:141] libmachine: Using SSH client type: native
	I0229 01:43:31.944476   12656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.11.152 22 <nil> <nil>}
	I0229 01:43:31.944476   12656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-589700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-589700/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-589700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 01:43:32.096345   12656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
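	The hostname script the provisioner ran above can be exercised locally against a scratch copy instead of the real /etc/hosts — a sketch only; `HOSTS` is a temp file and the pre-seeded `old-name` entry is illustrative:

	```shell
	# Reproduce the /etc/hosts update logic from the log on a scratch file.
	HOSTS=$(mktemp)
	NAME=ingress-addon-legacy-589700
	printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

	# Same structure as the provisioner's script: skip if the name is already
	# present, otherwise rewrite the 127.0.1.1 entry or append a new one.
	if ! grep -xq ".*\s$NAME" "$HOSTS"; then
	  if grep -xq '127.0.1.1\s.*' "$HOSTS"; then
	    # an existing 127.0.1.1 line: replace it with the new hostname
	    sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
	  else
	    # no 127.0.1.1 line yet: append one
	    echo "127.0.1.1 $NAME" >> "$HOSTS"
	  fi
	fi
	cat "$HOSTS"
	```

	The empty SSH output in the log line above is the success case: the `sed` branch rewrote the entry in place, so nothing was echoed by `tee`.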
	I0229 01:43:32.096345   12656 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 01:43:32.096345   12656 buildroot.go:174] setting up certificates
	I0229 01:43:32.096345   12656 provision.go:83] configureAuth start
	I0229 01:43:32.096345   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:43:34.144772   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:43:34.144772   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:34.144772   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:43:36.569153   12656 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:43:36.569225   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:36.569225   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:43:38.546668   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:43:38.546668   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:38.546668   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:43:40.954846   12656 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:43:40.955143   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:40.955143   12656 provision.go:138] copyHostCerts
	I0229 01:43:40.955143   12656 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 01:43:40.955143   12656 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 01:43:40.955143   12656 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 01:43:40.955818   12656 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 01:43:40.956415   12656 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 01:43:40.957064   12656 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 01:43:40.957064   12656 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 01:43:40.957064   12656 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 01:43:40.957740   12656 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 01:43:40.967786   12656 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 01:43:40.967786   12656 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 01:43:40.968222   12656 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0229 01:43:40.968663   12656 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ingress-addon-legacy-589700 san=[172.19.11.152 172.19.11.152 localhost 127.0.0.1 minikube ingress-addon-legacy-589700]
	I0229 01:43:41.109426   12656 provision.go:172] copyRemoteCerts
	I0229 01:43:41.122020   12656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 01:43:41.122020   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:43:43.123319   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:43:43.123319   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:43.123319   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:43:45.528812   12656 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:43:45.529991   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:45.530541   12656 sshutil.go:53] new ssh client: &{IP:172.19.11.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-589700\id_rsa Username:docker}
	I0229 01:43:45.638392   12656 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5161206s)
	I0229 01:43:45.638392   12656 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 01:43:45.638930   12656 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 01:43:45.685515   12656 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 01:43:45.686141   12656 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1257 bytes)
	I0229 01:43:45.736627   12656 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 01:43:45.736854   12656 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 01:43:45.790826   12656 provision.go:86] duration metric: configureAuth took 13.6936507s
	I0229 01:43:45.790826   12656 buildroot.go:189] setting minikube options for container-runtime
	I0229 01:43:45.791350   12656 config.go:182] Loaded profile config "ingress-addon-legacy-589700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 01:43:45.791627   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:43:47.768048   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:43:47.768048   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:47.768048   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:43:50.170736   12656 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:43:50.171762   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:50.175771   12656 main.go:141] libmachine: Using SSH client type: native
	I0229 01:43:50.175949   12656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.11.152 22 <nil> <nil>}
	I0229 01:43:50.175949   12656 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 01:43:50.310839   12656 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 01:43:50.310839   12656 buildroot.go:70] root file system type: tmpfs
	I0229 01:43:50.310839   12656 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 01:43:50.310839   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:43:52.336969   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:43:52.336969   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:52.337067   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:43:54.735072   12656 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:43:54.735072   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:54.740725   12656 main.go:141] libmachine: Using SSH client type: native
	I0229 01:43:54.741391   12656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.11.152 22 <nil> <nil>}
	I0229 01:43:54.741391   12656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 01:43:54.915028   12656 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 01:43:54.915255   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:43:56.923412   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:43:56.923691   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:56.923691   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:43:59.344794   12656 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:43:59.344935   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:43:59.348720   12656 main.go:141] libmachine: Using SSH client type: native
	I0229 01:43:59.349314   12656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.11.152 22 <nil> <nil>}
	I0229 01:43:59.349314   12656 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 01:44:00.412397   12656 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
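	The `diff ... || { mv ...; }` command above is an update-if-changed idiom: the unit file is only swapped in (and docker restarted) when the staged `.new` file differs from what is installed, and a missing target — as in this first-boot log, where `diff` reports `No such file or directory` — also triggers the install path. A minimal sketch against scratch files (`OLD`/`NEW` are placeholders; the real command additionally runs `daemon-reload`, `enable`, and `restart`):

	```shell
	# Stage a new config and install it only if it differs from the current one.
	OLD=$(mktemp)
	NEW=$(mktemp)
	echo 'ExecStart=/usr/bin/dockerd' > "$NEW"
	rm "$OLD"   # simulate the first-boot case: no existing unit file

	# diff exits non-zero when OLD is missing or differs, so the move runs.
	diff -u "$OLD" "$NEW" 2>/dev/null || mv "$NEW" "$OLD"
	cat "$OLD"
	```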
	I0229 01:44:00.413015   12656 machine.go:91] provisioned docker machine in 37.5893206s
	I0229 01:44:00.413077   12656 client.go:171] LocalClient.Create took 1m43.8579528s
	I0229 01:44:00.413077   12656 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-589700" took 1m43.8579528s
	I0229 01:44:00.413077   12656 start.go:300] post-start starting for "ingress-addon-legacy-589700" (driver="hyperv")
	I0229 01:44:00.413077   12656 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 01:44:00.423261   12656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 01:44:00.423991   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:44:02.422122   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:44:02.422122   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:44:02.422122   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:44:04.810075   12656 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:44:04.810132   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:44:04.810702   12656 sshutil.go:53] new ssh client: &{IP:172.19.11.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-589700\id_rsa Username:docker}
	I0229 01:44:04.926063   12656 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5020263s)
	I0229 01:44:04.935857   12656 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 01:44:04.942772   12656 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 01:44:04.942772   12656 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 01:44:04.943403   12656 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 01:44:04.944075   12656 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> 33122.pem in /etc/ssl/certs
	I0229 01:44:04.944075   12656 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /etc/ssl/certs/33122.pem
	I0229 01:44:04.955363   12656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 01:44:04.973393   12656 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /etc/ssl/certs/33122.pem (1708 bytes)
	I0229 01:44:05.023592   12656 start.go:303] post-start completed in 4.6102583s
	I0229 01:44:05.026313   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:44:07.047771   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:44:07.047771   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:44:07.048120   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:44:09.453084   12656 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:44:09.453129   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:44:09.453188   12656 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\config.json ...
	I0229 01:44:09.455568   12656 start.go:128] duration metric: createHost completed in 1m52.9003669s
	I0229 01:44:09.455637   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:44:11.446695   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:44:11.447302   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:44:11.447302   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:44:13.832353   12656 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:44:13.832353   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:44:13.837475   12656 main.go:141] libmachine: Using SSH client type: native
	I0229 01:44:13.837949   12656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.11.152 22 <nil> <nil>}
	I0229 01:44:13.837949   12656 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 01:44:13.981069   12656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709171054.150894875
	
	I0229 01:44:13.981069   12656 fix.go:206] guest clock: 1709171054.150894875
	I0229 01:44:13.981069   12656 fix.go:219] Guest: 2024-02-29 01:44:14.150894875 +0000 UTC Remote: 2024-02-29 01:44:09.4555681 +0000 UTC m=+123.812928701 (delta=4.695326775s)
	I0229 01:44:13.981069   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:44:15.977584   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:44:15.977584   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:44:15.977950   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:44:18.425176   12656 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:44:18.425458   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:44:18.430068   12656 main.go:141] libmachine: Using SSH client type: native
	I0229 01:44:18.430473   12656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.11.152 22 <nil> <nil>}
	I0229 01:44:18.430543   12656 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709171053
	I0229 01:44:18.588345   12656 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 01:44:13 UTC 2024
	
	I0229 01:44:18.588345   12656 fix.go:226] clock set: Thu Feb 29 01:44:13 UTC 2024
	 (err=<nil>)
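	The clock fix above compares the guest's `date +%s.%N` against the controller's wall clock (delta=4.695326775s in this run) and resyncs the VM with `sudo date -s @<epoch>`. The skew computation can be sketched as follows — the two timestamps are taken from this log, and the threshold check is illustrative:

	```shell
	# Compute absolute clock skew between guest and host epoch timestamps.
	guest=1709171054.150894875   # guest's `date +%s.%N` output, from the log
	host=1709171049.455568100    # controller wall clock at the same moment

	# awk handles the fractional-second arithmetic portably
	delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; print d }')
	echo "skew: ${delta}s"
	```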
	I0229 01:44:18.588345   12656 start.go:83] releasing machines lock for "ingress-addon-legacy-589700", held for 2m2.0335122s
	I0229 01:44:18.588961   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:44:20.602912   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:44:20.604042   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:44:20.604107   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:44:23.006440   12656 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:44:23.006440   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:44:23.010804   12656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 01:44:23.011005   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:44:23.019053   12656 ssh_runner.go:195] Run: cat /version.json
	I0229 01:44:23.019686   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:44:25.008334   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:44:25.008334   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:44:25.008735   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:44:25.035035   12656 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:44:25.035210   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:44:25.035210   12656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:44:27.472577   12656 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:44:27.472577   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:44:27.472577   12656 sshutil.go:53] new ssh client: &{IP:172.19.11.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-589700\id_rsa Username:docker}
	I0229 01:44:27.497297   12656 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:44:27.497840   12656 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:44:27.498230   12656 sshutil.go:53] new ssh client: &{IP:172.19.11.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-589700\id_rsa Username:docker}
	I0229 01:44:27.574756   12656 ssh_runner.go:235] Completed: cat /version.json: (4.5549163s)
	I0229 01:44:27.585490   12656 ssh_runner.go:195] Run: systemctl --version
	I0229 01:44:27.696134   12656 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6842928s)
	I0229 01:44:27.705066   12656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 01:44:27.714938   12656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 01:44:27.723793   12656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0229 01:44:27.750895   12656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0229 01:44:27.778771   12656 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 01:44:27.778771   12656 start.go:475] detecting cgroup driver to use...
	I0229 01:44:27.778771   12656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:44:27.827707   12656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0229 01:44:27.867355   12656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 01:44:27.891682   12656 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 01:44:27.907556   12656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 01:44:27.937509   12656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 01:44:27.966733   12656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 01:44:27.997960   12656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 01:44:28.025808   12656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 01:44:28.053301   12656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 01:44:28.082455   12656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 01:44:28.110937   12656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 01:44:28.138495   12656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:44:28.337808   12656 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 01:44:28.370633   12656 start.go:475] detecting cgroup driver to use...
	I0229 01:44:28.380550   12656 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 01:44:28.423027   12656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:44:28.454084   12656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 01:44:28.503311   12656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:44:28.540429   12656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 01:44:28.576330   12656 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 01:44:28.627252   12656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 01:44:28.653274   12656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:44:28.706941   12656 ssh_runner.go:195] Run: which cri-dockerd
	I0229 01:44:28.724416   12656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 01:44:28.742930   12656 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 01:44:28.788468   12656 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 01:44:28.989952   12656 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 01:44:29.185667   12656 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 01:44:29.185667   12656 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 01:44:29.233851   12656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:44:29.429413   12656 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 01:44:30.944218   12656 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5147209s)
	I0229 01:44:30.954556   12656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 01:44:30.996753   12656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 01:44:31.033828   12656 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I0229 01:44:31.033917   12656 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 01:44:31.037705   12656 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 01:44:31.037774   12656 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 01:44:31.037774   12656 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 01:44:31.037774   12656 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:a3:c1 Flags:up|broadcast|multicast|running}
	I0229 01:44:31.040332   12656 ip.go:210] interface addr: fe80::fc78:4865:5cac:d448/64
	I0229 01:44:31.040332   12656 ip.go:210] interface addr: 172.19.0.1/20
	I0229 01:44:31.048186   12656 ssh_runner.go:195] Run: grep 172.19.0.1	host.minikube.internal$ /etc/hosts
	I0229 01:44:31.055187   12656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
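	The `/etc/hosts` command above uses a grep-then-append idiom: strip any stale `host.minikube.internal` line first, then write the current gateway IP, so repeated starts never accumulate duplicate entries. A sketch against a scratch file (the real command targets `/etc/hosts` via `sudo cp`; the stale IP here is invented):

	```shell
	HOSTS=/tmp/hosts.demo
	# Seed the file with a stale host.minikube.internal entry:
	printf '127.0.0.1\tlocalhost\n172.19.0.9\thost.minikube.internal\n' > "$HOSTS"
	# Drop the stale line, append the fresh mapping, replace atomically:
	{ grep -v 'host\.minikube\.internal$' "$HOSTS"; \
	  printf '172.19.0.1\thost.minikube.internal\n'; } > "$HOSTS.new"
	mv "$HOSTS.new" "$HOSTS"
	cat "$HOSTS"
	```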
	I0229 01:44:31.077747   12656 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 01:44:31.086944   12656 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:44:31.121781   12656 docker.go:685] Got preloaded images: 
	I0229 01:44:31.121781   12656 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0229 01:44:31.132411   12656 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 01:44:31.162833   12656 ssh_runner.go:195] Run: which lz4
	I0229 01:44:31.169847   12656 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0229 01:44:31.179184   12656 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 01:44:31.186560   12656 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 01:44:31.186650   12656 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0229 01:44:33.269177   12656 docker.go:649] Took 2.098855 seconds to copy over tarball
	I0229 01:44:33.279104   12656 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 01:44:39.925678   12656 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (6.6462041s)
	I0229 01:44:39.925802   12656 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 01:44:40.000044   12656 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 01:44:40.021849   12656 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0229 01:44:40.068846   12656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:44:40.267321   12656 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 01:44:45.805476   12656 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.5369234s)
	I0229 01:44:45.812483   12656 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:44:45.841995   12656 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0229 01:44:45.842037   12656 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0229 01:44:45.842088   12656 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 01:44:45.861324   12656 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0229 01:44:45.868347   12656 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:44:45.868347   12656 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 01:44:45.872921   12656 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0229 01:44:45.874379   12656 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0229 01:44:45.875284   12656 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 01:44:45.877718   12656 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0229 01:44:45.881972   12656 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 01:44:45.884503   12656 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 01:44:45.885194   12656 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 01:44:45.891949   12656 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0229 01:44:45.892959   12656 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0229 01:44:45.892959   12656 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 01:44:45.893958   12656 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:44:45.897940   12656 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 01:44:45.899927   12656 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	W0229 01:44:46.021838   12656 image.go:187] authn lookup for registry.k8s.io/pause:3.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 01:44:46.098647   12656 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.18.20 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 01:44:46.176305   12656 image.go:187] authn lookup for registry.k8s.io/etcd:3.4.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 01:44:46.253030   12656 image.go:187] authn lookup for registry.k8s.io/coredns:1.6.7 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 01:44:46.329900   12656 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.18.20 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 01:44:46.408735   12656 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 01:44:46.486033   12656 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.18.20 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 01:44:46.563573   12656 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.18.20 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 01:44:46.632009   12656 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 01:44:46.635703   12656 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0229 01:44:46.646689   12656 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0229 01:44:46.671257   12656 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0229 01:44:46.671257   12656 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.18.20 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.18.20
	I0229 01:44:46.671257   12656 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 01:44:46.671257   12656 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0229 01:44:46.671257   12656 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.2 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0229 01:44:46.671257   12656 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0229 01:44:46.672235   12656 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0229 01:44:46.679022   12656 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0229 01:44:46.680132   12656 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 01:44:46.680829   12656 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0229 01:44:46.693856   12656 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0229 01:44:46.698422   12656 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0229 01:44:46.698422   12656 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.18.20 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.18.20
	I0229 01:44:46.698422   12656 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 01:44:46.705261   12656 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0229 01:44:46.719105   12656 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0229 01:44:46.719105   12656 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.7 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.7
	I0229 01:44:46.719105   12656 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
	I0229 01:44:46.727348   12656 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:44:46.729982   12656 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0229 01:44:46.776167   12656 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0229 01:44:46.778635   12656 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0229 01:44:46.778684   12656 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.18.20
	I0229 01:44:46.778684   12656 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0229 01:44:46.778737   12656 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0229 01:44:46.778737   12656 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0229 01:44:46.778926   12656 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0229 01:44:46.778926   12656 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.18.20 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.18.20
	I0229 01:44:46.778926   12656 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 01:44:46.787103   12656 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.18.20
	I0229 01:44:46.787806   12656 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0229 01:44:46.789200   12656 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0229 01:44:46.816426   12656 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.7
	I0229 01:44:46.846280   12656 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0229 01:44:46.846335   12656 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.18.20 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.18.20
	I0229 01:44:46.846373   12656 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 01:44:46.854290   12656 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0229 01:44:46.862294   12656 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.18.20
	I0229 01:44:46.862294   12656 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0229 01:44:46.881006   12656 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.18.20
	I0229 01:44:46.882001   12656 cache_images.go:92] LoadImages completed in 1.0398123s
	W0229 01:44:46.882001   12656 out.go:239] X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.18.20: The system cannot find the path specified.
	X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.18.20: The system cannot find the path specified.
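	The repeated "windows sanitize" lines (localpath.go:146) and the `CreateFile` failure above hinge on one detail: an image reference carries a `:` tag separator, which is not a legal Windows file-name character, so the on-disk cache path swaps it for `_`. A sketch of that mapping using bash substitution (the variable names are mine):

	```shell
	# ':' is invalid in Windows file names, so the cache stores
	# kube-controller-manager:v1.18.20 as kube-controller-manager_v1.18.20.
	ref='kube-controller-manager:v1.18.20'
	sanitized=${ref//:/_}
	echo "$sanitized"   # kube-controller-manager_v1.18.20
	```

	The load then fails because the sanitized path was never populated in this run's cache directory, not because the sanitization itself is wrong.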
	I0229 01:44:46.890019   12656 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 01:44:46.928766   12656 cni.go:84] Creating CNI manager for ""
	I0229 01:44:46.928978   12656 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 01:44:46.929023   12656 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 01:44:46.929023   12656 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.11.152 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-589700 NodeName:ingress-addon-legacy-589700 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.11.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.11.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 01:44:46.929150   12656 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.11.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-589700"
	  kubeletExtraArgs:
	    node-ip: 172.19.11.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.11.152"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 01:44:46.929150   12656 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-589700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.11.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-589700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 01:44:46.942838   12656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0229 01:44:46.962777   12656 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 01:44:46.970842   12656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 01:44:46.991616   12656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0229 01:44:47.023871   12656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0229 01:44:47.059534   12656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0229 01:44:47.102440   12656 ssh_runner.go:195] Run: grep 172.19.11.152	control-plane.minikube.internal$ /etc/hosts
	I0229 01:44:47.109523   12656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.11.152	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 01:44:47.130113   12656 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700 for IP: 172.19.11.152
	I0229 01:44:47.130113   12656 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:44:47.142195   12656 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 01:44:47.159382   12656 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 01:44:47.159807   12656 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\client.key
	I0229 01:44:47.160418   12656 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\client.crt with IP's: []
	I0229 01:44:47.478386   12656 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\client.crt ...
	I0229 01:44:47.478386   12656 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\client.crt: {Name:mkca55b34ddb0cf637f5c2e2f6179eb491c67170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:44:47.479435   12656 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\client.key ...
	I0229 01:44:47.479435   12656 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\client.key: {Name:mka483c45abee61f3a60c35e0fb72563ebc4dd6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:44:47.480595   12656 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\apiserver.key.f4d9bc23
	I0229 01:44:47.481537   12656 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\apiserver.crt.f4d9bc23 with IP's: [172.19.11.152 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 01:44:47.868422   12656 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\apiserver.crt.f4d9bc23 ...
	I0229 01:44:47.868422   12656 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\apiserver.crt.f4d9bc23: {Name:mkf282eb76dfdf678900a778735e0b09924f47fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:44:47.869641   12656 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\apiserver.key.f4d9bc23 ...
	I0229 01:44:47.869641   12656 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\apiserver.key.f4d9bc23: {Name:mk4cd9abc9ed55701970ea25830973f75790d8c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:44:47.870630   12656 certs.go:337] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\apiserver.crt.f4d9bc23 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\apiserver.crt
	I0229 01:44:47.882633   12656 certs.go:341] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\apiserver.key.f4d9bc23 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\apiserver.key
	I0229 01:44:47.883801   12656 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\proxy-client.key
	I0229 01:44:47.884400   12656 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\proxy-client.crt with IP's: []
	I0229 01:44:48.246076   12656 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\proxy-client.crt ...
	I0229 01:44:48.246076   12656 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\proxy-client.crt: {Name:mk1df3c0d0c94f7a39ea30d061a49f2de79411a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:44:48.247296   12656 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\proxy-client.key ...
	I0229 01:44:48.247296   12656 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\proxy-client.key: {Name:mkfa2d6b1fbbff3147682c3f669bddf7333a7352 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
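	The certs.go/crypto.go sequence above signs an apiserver cert against the cached minikubeCA with four IP SANs. A rough equivalent using `openssl` instead of minikube's internal crypto.go — throwaway CA, one-day validity, and `/tmp` paths are all my own choices, only the SAN list comes from the log:

	```shell
	DIR=/tmp/mk-cert-demo; mkdir -p "$DIR"
	# Throwaway CA standing in for minikubeCA:
	openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=minikubeCA' \
	  -keyout "$DIR/ca.key" -out "$DIR/ca.crt" 2>/dev/null
	# Server key + CSR:
	openssl req -newkey rsa:2048 -nodes -subj '/CN=minikube' \
	  -keyout "$DIR/apiserver.key" -out "$DIR/apiserver.csr" 2>/dev/null
	# Sign with the same IP SANs the log lists:
	printf 'subjectAltName=IP:172.19.11.152,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1\n' \
	  > "$DIR/san.ext"
	openssl x509 -req -days 1 -in "$DIR/apiserver.csr" -CA "$DIR/ca.crt" \
	  -CAkey "$DIR/ca.key" -CAcreateserial -extfile "$DIR/san.ext" \
	  -out "$DIR/apiserver.crt" 2>/dev/null
	openssl verify -CAfile "$DIR/ca.crt" "$DIR/apiserver.crt"
	```

	The SANs 10.96.0.1 (first service-CIDR IP) and 127.0.0.1 are what let in-cluster clients and localhost tooling talk to the apiserver over TLS without name mismatches.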
	I0229 01:44:48.247878   12656 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 01:44:48.248717   12656 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 01:44:48.248717   12656 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 01:44:48.260396   12656 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 01:44:48.261404   12656 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 01:44:48.261404   12656 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 01:44:48.261404   12656 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 01:44:48.261404   12656 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 01:44:48.262069   12656 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem (1338 bytes)
	W0229 01:44:48.262778   12656 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312_empty.pem, impossibly tiny 0 bytes
	I0229 01:44:48.262778   12656 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 01:44:48.263446   12656 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 01:44:48.263446   12656 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 01:44:48.263446   12656 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0229 01:44:48.264139   12656 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem (1708 bytes)
	I0229 01:44:48.264332   12656 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:44:48.264332   12656 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem -> /usr/share/ca-certificates/3312.pem
	I0229 01:44:48.264332   12656 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /usr/share/ca-certificates/33122.pem
	I0229 01:44:48.265008   12656 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 01:44:48.314044   12656 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 01:44:48.361444   12656 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 01:44:48.410833   12656 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-589700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 01:44:48.461995   12656 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 01:44:48.515036   12656 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 01:44:48.568244   12656 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 01:44:48.616125   12656 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 01:44:48.662650   12656 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 01:44:48.712494   12656 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem --> /usr/share/ca-certificates/3312.pem (1338 bytes)
	I0229 01:44:48.757478   12656 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /usr/share/ca-certificates/33122.pem (1708 bytes)
	I0229 01:44:48.810196   12656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 01:44:48.855006   12656 ssh_runner.go:195] Run: openssl version
	I0229 01:44:48.873735   12656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 01:44:48.903522   12656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:44:48.910201   12656 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:44:48.919363   12656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:44:48.937266   12656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 01:44:48.967759   12656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3312.pem && ln -fs /usr/share/ca-certificates/3312.pem /etc/ssl/certs/3312.pem"
	I0229 01:44:48.999004   12656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3312.pem
	I0229 01:44:49.007026   12656 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 01:44:49.016434   12656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3312.pem
	I0229 01:44:49.036740   12656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3312.pem /etc/ssl/certs/51391683.0"
	I0229 01:44:49.069995   12656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/33122.pem && ln -fs /usr/share/ca-certificates/33122.pem /etc/ssl/certs/33122.pem"
	I0229 01:44:49.100786   12656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/33122.pem
	I0229 01:44:49.109115   12656 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 01:44:49.119815   12656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/33122.pem
	I0229 01:44:49.138052   12656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/33122.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 01:44:49.168264   12656 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 01:44:49.174243   12656 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 01:44:49.175256   12656 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-589700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-589700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.11.152 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:44:49.182255   12656 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 01:44:49.216390   12656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 01:44:49.246857   12656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 01:44:49.274807   12656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:44:49.294422   12656 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:44:49.294422   12656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 01:44:49.380092   12656 kubeadm.go:322] W0229 01:44:49.551397    1581 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 01:44:49.487649   12656 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 01:44:49.543213   12656 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0229 01:44:49.672892   12656 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 01:44:54.209436   12656 kubeadm.go:322] W0229 01:44:54.382152    1581 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 01:44:54.211133   12656 kubeadm.go:322] W0229 01:44:54.383842    1581 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 01:46:49.222179   12656 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 01:46:49.224106   12656 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 01:46:49.228666   12656 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 01:46:49.228666   12656 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:46:49.229281   12656 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:46:49.229281   12656 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:46:49.229281   12656 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 01:46:49.229849   12656 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:46:49.229964   12656 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:46:49.229964   12656 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 01:46:49.229964   12656 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:46:49.230782   12656 out.go:204]   - Generating certificates and keys ...
	I0229 01:46:49.230782   12656 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:46:49.230782   12656 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:46:49.230782   12656 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 01:46:49.231472   12656 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 01:46:49.231472   12656 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 01:46:49.231472   12656 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 01:46:49.231472   12656 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 01:46:49.232217   12656 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-589700 localhost] and IPs [172.19.11.152 127.0.0.1 ::1]
	I0229 01:46:49.232375   12656 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 01:46:49.232637   12656 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-589700 localhost] and IPs [172.19.11.152 127.0.0.1 ::1]
	I0229 01:46:49.232833   12656 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 01:46:49.232902   12656 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 01:46:49.233065   12656 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 01:46:49.233065   12656 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:46:49.233065   12656 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:46:49.233065   12656 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:46:49.233065   12656 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:46:49.233591   12656 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:46:49.233770   12656 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:46:49.234245   12656 out.go:204]   - Booting up control plane ...
	I0229 01:46:49.234408   12656 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:46:49.234521   12656 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:46:49.234521   12656 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:46:49.234521   12656 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:46:49.235323   12656 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:46:49.235323   12656 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 01:46:49.235323   12656 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:46:49.236002   12656 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:46:49.236084   12656 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:46:49.236576   12656 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:46:49.236757   12656 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:46:49.236757   12656 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:46:49.236757   12656 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:46:49.237505   12656 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:46:49.237532   12656 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:46:49.237532   12656 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:46:49.237532   12656 kubeadm.go:322] 
	I0229 01:46:49.238125   12656 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 01:46:49.238125   12656 kubeadm.go:322] 		timed out waiting for the condition
	I0229 01:46:49.238125   12656 kubeadm.go:322] 
	I0229 01:46:49.238125   12656 kubeadm.go:322] 	This error is likely caused by:
	I0229 01:46:49.238125   12656 kubeadm.go:322] 		- The kubelet is not running
	I0229 01:46:49.238125   12656 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 01:46:49.238125   12656 kubeadm.go:322] 
	I0229 01:46:49.238774   12656 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 01:46:49.238774   12656 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 01:46:49.238774   12656 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 01:46:49.238774   12656 kubeadm.go:322] 
	I0229 01:46:49.238774   12656 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 01:46:49.239426   12656 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 01:46:49.239426   12656 kubeadm.go:322] 
	I0229 01:46:49.239524   12656 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0229 01:46:49.239524   12656 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0229 01:46:49.239524   12656 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 01:46:49.239524   12656 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0229 01:46:49.239524   12656 kubeadm.go:322] 
	W0229 01:46:49.240159   12656 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-589700 localhost] and IPs [172.19.11.152 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-589700 localhost] and IPs [172.19.11.152 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 01:44:49.551397    1581 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 01:44:54.382152    1581 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 01:44:54.383842    1581 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-589700 localhost] and IPs [172.19.11.152 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-589700 localhost] and IPs [172.19.11.152 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 01:44:49.551397    1581 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 01:44:54.382152    1581 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 01:44:54.383842    1581 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
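The kubelet-check probe that kubeadm retries above can be reproduced by hand. A minimal sketch, assuming shell access to the minikube node; it uses bash's `/dev/tcp` redirection instead of `curl` so it has no external dependencies, and port 10248 is the kubelet healthz port polled in the log:

```shell
#!/usr/bin/env bash
# Sketch: check whether anything is listening on the kubelet healthz port.
# Mirrors the log's failing 'curl -sSL http://localhost:10248/healthz' call:
# a refused TCP connect is exactly the "connection refused" seen above.
probe() {
  # Attempt a TCP connect to 127.0.0.1:$1; suppress bash's error message.
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo "healthy"
  else
    echo "refused"
  fi
}

probe 10248
```

While the kubelet is down this prints `refused`, matching the repeated `dial tcp 127.0.0.1:10248: connect: connection refused` lines in the log.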
	
	I0229 01:46:49.240344   12656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 01:46:49.848779   12656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:46:49.887829   12656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:46:49.907395   12656 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:46:49.907539   12656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 01:46:49.986110   12656 kubeadm.go:322] W0229 01:46:50.158293   20249 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 01:46:50.098266   12656 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 01:46:50.140507   12656 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0229 01:46:50.256644   12656 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 01:46:51.949211   12656 kubeadm.go:322] W0229 01:46:52.120691   20249 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 01:46:51.960126   12656 kubeadm.go:322] W0229 01:46:52.131667   20249 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 01:48:46.968878   12656 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 01:48:46.969190   12656 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 01:48:46.970731   12656 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 01:48:46.970977   12656 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:48:46.970977   12656 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:48:46.971503   12656 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:48:46.971642   12656 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 01:48:46.971642   12656 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:48:46.972213   12656 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:48:46.972283   12656 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 01:48:46.972498   12656 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:48:46.973597   12656 out.go:204]   - Generating certificates and keys ...
	I0229 01:48:46.973961   12656 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:48:46.974260   12656 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:48:46.974400   12656 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 01:48:46.974634   12656 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 01:48:46.974841   12656 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 01:48:46.975056   12656 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 01:48:46.975268   12656 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 01:48:46.975414   12656 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 01:48:46.975593   12656 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 01:48:46.975723   12656 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 01:48:46.975824   12656 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 01:48:46.975920   12656 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:48:46.976067   12656 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:48:46.976138   12656 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:48:46.976281   12656 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:48:46.976425   12656 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:48:46.976610   12656 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:48:46.976976   12656 out.go:204]   - Booting up control plane ...
	I0229 01:48:46.976976   12656 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:48:46.977520   12656 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:48:46.977647   12656 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:48:46.977932   12656 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:48:46.977995   12656 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:48:46.977995   12656 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 01:48:46.978541   12656 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:48:46.978864   12656 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:48:46.979050   12656 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:48:46.979647   12656 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:48:46.979957   12656 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:48:46.980016   12656 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:48:46.980016   12656 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:48:46.980618   12656 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:48:46.980618   12656 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:48:46.981160   12656 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:48:46.981282   12656 kubeadm.go:322] 
	I0229 01:48:46.981340   12656 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 01:48:46.981460   12656 kubeadm.go:322] 		timed out waiting for the condition
	I0229 01:48:46.981520   12656 kubeadm.go:322] 
	I0229 01:48:46.981577   12656 kubeadm.go:322] 	This error is likely caused by:
	I0229 01:48:46.981640   12656 kubeadm.go:322] 		- The kubelet is not running
	I0229 01:48:46.982008   12656 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 01:48:46.982092   12656 kubeadm.go:322] 
	I0229 01:48:46.982562   12656 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 01:48:46.982679   12656 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 01:48:46.982792   12656 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 01:48:46.982850   12656 kubeadm.go:322] 
	I0229 01:48:46.983083   12656 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 01:48:46.983258   12656 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 01:48:46.983258   12656 kubeadm.go:322] 
	I0229 01:48:46.983258   12656 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0229 01:48:46.983258   12656 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0229 01:48:46.983258   12656 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 01:48:46.983258   12656 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0229 01:48:46.983258   12656 kubeadm.go:322] 
	I0229 01:48:46.983258   12656 kubeadm.go:406] StartCluster complete in 3m57.7947881s
	I0229 01:48:46.994482   12656 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:48:47.027409   12656 logs.go:276] 0 containers: []
	W0229 01:48:47.027409   12656 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:48:47.038267   12656 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:48:47.074482   12656 logs.go:276] 0 containers: []
	W0229 01:48:47.074482   12656 logs.go:278] No container was found matching "etcd"
	I0229 01:48:47.084930   12656 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:48:47.125029   12656 logs.go:276] 0 containers: []
	W0229 01:48:47.125103   12656 logs.go:278] No container was found matching "coredns"
	I0229 01:48:47.133055   12656 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:48:47.173662   12656 logs.go:276] 0 containers: []
	W0229 01:48:47.173695   12656 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:48:47.181104   12656 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:48:47.215662   12656 logs.go:276] 0 containers: []
	W0229 01:48:47.215737   12656 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:48:47.225246   12656 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:48:47.249882   12656 logs.go:276] 0 containers: []
	W0229 01:48:47.249882   12656 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:48:47.258287   12656 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:48:47.286131   12656 logs.go:276] 0 containers: []
	W0229 01:48:47.286131   12656 logs.go:278] No container was found matching "kindnet"
	I0229 01:48:47.286131   12656 logs.go:123] Gathering logs for container status ...
	I0229 01:48:47.286131   12656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
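The `Run` line above uses a fallback pattern worth noting: prefer `crictl` when it is installed, otherwise fall back to plain `docker ps -a`. A hedged, sudo-free sketch of the same pattern (the `pick_cli` helper name is illustrative, not from minikube):

```shell
#!/usr/bin/env bash
# Sketch of the CLI-fallback pattern in 'which crictl || echo crictl':
# use the first tool if present on PATH, otherwise the fallback.
pick_cli() {
  if which "$1" >/dev/null 2>&1; then
    echo "$1"
  else
    echo "$2"
  fi
}

CLI="$(pick_cli crictl docker)"
echo "would run: $CLI ps -a"
```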
	I0229 01:48:47.422529   12656 logs.go:123] Gathering logs for kubelet ...
	I0229 01:48:47.422638   12656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:48:47.471279   12656 logs.go:138] Found kubelet problem: Feb 29 01:48:40 ingress-addon-legacy-589700 kubelet[37392]: F0229 01:48:40.140244   37392 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 01:48:47.480611   12656 logs.go:138] Found kubelet problem: Feb 29 01:48:41 ingress-addon-legacy-589700 kubelet[37588]: F0229 01:48:41.355076   37588 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 01:48:47.488103   12656 logs.go:138] Found kubelet problem: Feb 29 01:48:42 ingress-addon-legacy-589700 kubelet[37781]: F0229 01:48:42.589380   37781 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 01:48:47.497240   12656 logs.go:138] Found kubelet problem: Feb 29 01:48:43 ingress-addon-legacy-589700 kubelet[37973]: F0229 01:48:43.852335   37973 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 01:48:47.505166   12656 logs.go:138] Found kubelet problem: Feb 29 01:48:45 ingress-addon-legacy-589700 kubelet[38160]: F0229 01:48:45.068659   38160 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 01:48:47.513189   12656 logs.go:138] Found kubelet problem: Feb 29 01:48:46 ingress-addon-legacy-589700 kubelet[38362]: F0229 01:48:46.404519   38362 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
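The six kubelet problems flagged above all share one fatal, which is the actual root cause of the wait-control-plane timeout: `Failed to start ContainerManager failed to get rootfs info`. A small sketch of how to isolate and count that fatal in a journal dump; on a live node you would pipe `journalctl -u kubelet -n 400` (the command the log runs) instead of the sample heredoc used here for self-containment:

```shell
#!/usr/bin/env bash
# Sketch: count occurrences of the recurring kubelet fatal in journal output.
# -F matches the string literally; -c prints the number of matching lines.
count_cm_failures() {
  grep -cF 'Failed to start ContainerManager'
}

count_cm_failures <<'EOF'
Feb 29 01:48:40 node kubelet[37392]: F0229 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
Feb 29 01:48:41 node kubelet[37588]: I0229 kubelet.go:100] Starting kubelet
EOF
# prints 1
```

A steadily climbing count across restarts, as in the six entries above, indicates the kubelet is crash-looping before it can serve `/healthz`.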
	I0229 01:48:47.522697   12656 logs.go:123] Gathering logs for dmesg ...
	I0229 01:48:47.522697   12656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:48:47.551728   12656 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:48:47.551794   12656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:48:47.651400   12656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:48:47.651446   12656 logs.go:123] Gathering logs for Docker ...
	I0229 01:48:47.651500   12656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0229 01:48:47.713490   12656 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 01:46:50.158293   20249 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 01:46:52.120691   20249 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 01:46:52.131667   20249 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 01:48:47.713490   12656 out.go:239] * 
	* 
	W0229 01:48:47.713490   12656 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 01:46:50.158293   20249 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 01:46:52.120691   20249 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 01:46:52.131667   20249 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 01:46:50.158293   20249 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 01:46:52.120691   20249 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 01:46:52.131667   20249 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
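The repeated [kubelet-check] lines above are kubeadm polling the kubelet's local healthz endpoint and getting connection refused. That probe can be reproduced by hand; the following is a hedged sketch only — the URL and the 10248 port come from the kubeadm output above, but `check_kubelet` and its parameters are illustrative, not part of kubeadm or minikube:

```shell
# Poll the kubelet healthz endpoint the way kubeadm's [kubelet-check]
# phase does: retry until it answers or the attempt budget runs out.
# Illustrative sketch; check_kubelet is not a real kubeadm helper.
check_kubelet() {
  url="${1:-http://localhost:10248/healthz}"
  tries="${2:-5}"
  delay="${3:-1}"
  i=1
  while [ "$i" -le "$tries" ]; do
    # -f makes curl return non-zero on HTTP errors as well as refusals
    if curl -sSf --max-time 2 "$url" >/dev/null 2>&1; then
      echo "kubelet healthy"
      return 0
    fi
    sleep "$delay"
    i=$((i + 1))
  done
  echo "kubelet not responding on $url"
  return 1
}
```

Against the VM in this log, such a probe would fail exactly as the kubeadm output shows, which is why the next diagnostic step is `journalctl -xeu kubelet` rather than more polling.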
	
	W0229 01:48:47.713490   12656 out.go:239] * 
	W0229 01:48:47.714635   12656 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:48:47.715773   12656 out.go:177] X Problems detected in kubelet:
	I0229 01:48:47.716350   12656 out.go:177]   Feb 29 01:48:40 ingress-addon-legacy-589700 kubelet[37392]: F0229 01:48:40.140244   37392 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 01:48:47.717121   12656 out.go:177]   Feb 29 01:48:41 ingress-addon-legacy-589700 kubelet[37588]: F0229 01:48:41.355076   37588 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 01:48:47.717864   12656 out.go:177]   Feb 29 01:48:42 ingress-addon-legacy-589700 kubelet[37781]: F0229 01:48:42.589380   37781 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 01:48:47.720672   12656 out.go:177] 
	W0229 01:48:47.721238   12656 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 01:46:50.158293   20249 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 01:46:52.120691   20249 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 01:46:52.131667   20249 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 01:48:47.721238   12656 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 01:48:47.721238   12656 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
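The stderr warning earlier — `detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd"` — together with this suggestion points at a cgroup-driver mismatch between Docker and the kubelet. Two commonly documented remedies (a hedged sketch; nothing in this log confirms either was applied): switch Docker to the systemd driver via its daemon config, as the kubernetes.io container-runtime guide describes, or pass the kubelet flag minikube itself suggests:

```
# Inside the guest: switch Docker to the systemd cgroup driver,
# then restart the daemon (standard daemon.json setting).
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# written to /etc/docker/daemon.json, followed by:
#   sudo systemctl restart docker

# Or, simpler for minikube: add the suggested flag to the failing
# invocation from this test run:
#   minikube start -p ingress-addon-legacy-589700 \
#     --kubernetes-version=v1.18.20 --driver=hyperv \
#     --extra-config=kubelet.cgroup-driver=systemd
```

Inside a minikube VM the daemon.json edit does not persist across recreation, so the `--extra-config` route is the one the suggestion line proposes.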
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 01:48:47.722486   12656 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p ingress-addon-legacy-589700 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv" : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (402.47s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (131.76s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-589700 addons enable ingress --alsologtostderr -v=5
E0229 01:49:28.675971    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ingress-addon-legacy-589700 addons enable ingress --alsologtostderr -v=5: exit status 10 (2m0.3858837s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 01:48:48.233147    4632 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 01:48:48.290308    4632 out.go:291] Setting OutFile to fd 760 ...
	I0229 01:48:48.302296    4632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:48:48.303316    4632 out.go:304] Setting ErrFile to fd 1076...
	I0229 01:48:48.303316    4632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:48:48.318352    4632 mustload.go:65] Loading cluster: ingress-addon-legacy-589700
	I0229 01:48:48.319092    4632 config.go:182] Loaded profile config "ingress-addon-legacy-589700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 01:48:48.319092    4632 addons.go:597] checking whether the cluster is paused
	I0229 01:48:48.319399    4632 config.go:182] Loaded profile config "ingress-addon-legacy-589700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 01:48:48.319497    4632 host.go:66] Checking if "ingress-addon-legacy-589700" exists ...
	I0229 01:48:48.320273    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:48:50.470405    4632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:48:50.470405    4632 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:48:50.479129    4632 ssh_runner.go:195] Run: systemctl --version
	I0229 01:48:50.480144    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:48:52.490566    4632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:48:52.491147    4632 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:48:52.491147    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:48:54.917523    4632 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:48:54.917523    4632 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:48:54.918858    4632 sshutil.go:53] new ssh client: &{IP:172.19.11.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-589700\id_rsa Username:docker}
	I0229 01:48:55.021804    4632 ssh_runner.go:235] Completed: systemctl --version: (4.5424231s)
	I0229 01:48:55.029623    4632 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 01:48:55.062168    4632 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0229 01:48:55.063616    4632 config.go:182] Loaded profile config "ingress-addon-legacy-589700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 01:48:55.063682    4632 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-589700"
	I0229 01:48:55.063682    4632 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-589700"
	I0229 01:48:55.063818    4632 host.go:66] Checking if "ingress-addon-legacy-589700" exists ...
	I0229 01:48:55.064563    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:48:57.081392    4632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:48:57.082347    4632 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:48:57.083195    4632 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0229 01:48:57.083911    4632 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 01:48:57.084594    4632 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 01:48:57.085257    4632 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 01:48:57.085776    4632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0229 01:48:57.085776    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:48:59.132468    4632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:48:59.133542    4632 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:48:59.133616    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:49:01.533044    4632 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:49:01.533044    4632 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:49:01.533044    4632 sshutil.go:53] new ssh client: &{IP:172.19.11.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-589700\id_rsa Username:docker}
	I0229 01:49:01.662818    4632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:49:01.766239    4632 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:01.766647    4632 retry.go:31] will retry after 164.65673ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:01.946191    4632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:49:02.049594    4632 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:02.049594    4632 retry.go:31] will retry after 538.697778ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:02.607101    4632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:49:02.710132    4632 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:02.710213    4632 retry.go:31] will retry after 445.747699ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:03.173452    4632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:49:03.262613    4632 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:03.262685    4632 retry.go:31] will retry after 744.141111ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:04.025640    4632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:49:04.158537    4632 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:04.158601    4632 retry.go:31] will retry after 1.833258316s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:06.004968    4632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:49:06.093403    4632 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:06.093486    4632 retry.go:31] will retry after 1.100989761s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:07.213777    4632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:49:07.305139    4632 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:07.305206    4632 retry.go:31] will retry after 3.834109571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:11.167540    4632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:49:11.293917    4632 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:11.293917    4632 retry.go:31] will retry after 4.844976035s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:16.163235    4632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:49:16.297779    4632 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:16.297899    4632 retry.go:31] will retry after 5.23868504s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:21.561247    4632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:49:21.703760    4632 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:21.703818    4632 retry.go:31] will retry after 5.817681096s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:27.547200    4632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:49:27.674604    4632 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:27.674679    4632 retry.go:31] will retry after 17.200994689s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:44.890131    4632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:49:44.986124    4632 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:49:44.986281    4632 retry.go:31] will retry after 25.825077739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:50:10.826160    4632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:50:10.919364    4632 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:50:10.919364    4632 retry.go:31] will retry after 37.380121649s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:50:48.311642    4632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:50:48.467150    4632 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:50:48.467244    4632 addons.go:470] Verifying addon ingress=true in "ingress-addon-legacy-589700"
	I0229 01:50:48.467838    4632 out.go:177] * Verifying ingress addon...
	I0229 01:50:48.470164    4632 out.go:177] 
	W0229 01:50:48.470861    4632 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-589700" does not exist: client config: context "ingress-addon-legacy-589700" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-589700" does not exist: client config: context "ingress-addon-legacy-589700" does not exist]
	W0229 01:50:48.470861    4632 out.go:239] * 
	* 
	W0229 01:50:48.477103    4632 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_addons_2eb5e4e15e556888b35a5aefe6dc4c93587c1b36_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_addons_2eb5e4e15e556888b35a5aefe6dc4c93587c1b36_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:50:48.478097    4632 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-589700 -n ingress-addon-legacy-589700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-589700 -n ingress-addon-legacy-589700: exit status 6 (11.3695583s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 01:50:48.620002    1684 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 01:50:59.815488    1684 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-589700" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-589700" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (131.76s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (77.19s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-589700 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ingress-addon-legacy-589700 addons enable ingress-dns --alsologtostderr -v=5: exit status 1 (1m5.7859561s)

-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0

-- /stdout --
** stderr ** 
	W0229 01:50:59.988200   13100 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 01:51:00.045750   13100 out.go:291] Setting OutFile to fd 1240 ...
	I0229 01:51:00.059511   13100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:51:00.059511   13100 out.go:304] Setting ErrFile to fd 1244...
	I0229 01:51:00.059511   13100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:51:00.072676   13100 mustload.go:65] Loading cluster: ingress-addon-legacy-589700
	I0229 01:51:00.073350   13100 config.go:182] Loaded profile config "ingress-addon-legacy-589700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 01:51:00.073350   13100 addons.go:597] checking whether the cluster is paused
	I0229 01:51:00.073508   13100 config.go:182] Loaded profile config "ingress-addon-legacy-589700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 01:51:00.073508   13100 host.go:66] Checking if "ingress-addon-legacy-589700" exists ...
	I0229 01:51:00.074158   13100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:51:02.057067   13100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:51:02.057067   13100 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:51:02.066963   13100 ssh_runner.go:195] Run: systemctl --version
	I0229 01:51:02.066963   13100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:51:04.086332   13100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:51:04.086332   13100 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:51:04.086430   13100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:51:06.503957   13100 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:51:06.504792   13100 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:51:06.505223   13100 sshutil.go:53] new ssh client: &{IP:172.19.11.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-589700\id_rsa Username:docker}
	I0229 01:51:06.609044   13100 ssh_runner.go:235] Completed: systemctl --version: (4.5418283s)
	I0229 01:51:06.616534   13100 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 01:51:06.641554   13100 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0229 01:51:06.642782   13100 config.go:182] Loaded profile config "ingress-addon-legacy-589700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 01:51:06.642859   13100 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-589700"
	I0229 01:51:06.642859   13100 addons.go:234] Setting addon ingress-dns=true in "ingress-addon-legacy-589700"
	I0229 01:51:06.642989   13100 host.go:66] Checking if "ingress-addon-legacy-589700" exists ...
	I0229 01:51:06.643933   13100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:51:08.679077   13100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:51:08.679077   13100 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:51:08.679984   13100 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0229 01:51:08.680695   13100 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 01:51:08.680695   13100 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0229 01:51:08.680695   13100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-589700 ).state
	I0229 01:51:10.711646   13100 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 01:51:10.711646   13100 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:51:10.711646   13100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-589700 ).networkadapters[0]).ipaddresses[0]
	I0229 01:51:13.097063   13100 main.go:141] libmachine: [stdout =====>] : 172.19.11.152
	
	I0229 01:51:13.097063   13100 main.go:141] libmachine: [stderr =====>] : 
	I0229 01:51:13.097562   13100 sshutil.go:53] new ssh client: &{IP:172.19.11.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-589700\id_rsa Username:docker}
	I0229 01:51:13.244069   13100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:51:13.364646   13100 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:13.364755   13100 retry.go:31] will retry after 354.601163ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:13.740610   13100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:51:13.840397   13100 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:13.840397   13100 retry.go:31] will retry after 297.905983ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:14.152278   13100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:51:14.245256   13100 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:14.245413   13100 retry.go:31] will retry after 743.035677ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:15.019675   13100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:51:15.154723   13100 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:15.154723   13100 retry.go:31] will retry after 833.47671ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:16.008985   13100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:51:16.122173   13100 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:16.122173   13100 retry.go:31] will retry after 978.782534ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:17.125666   13100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:51:17.251993   13100 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:17.252860   13100 retry.go:31] will retry after 1.9980004s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:19.269562   13100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:51:19.363302   13100 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:19.363302   13100 retry.go:31] will retry after 1.613399757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:20.999600   13100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:51:21.135251   13100 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:21.135355   13100 retry.go:31] will retry after 4.949295081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:26.095208   13100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:51:26.200184   13100 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:26.200265   13100 retry.go:31] will retry after 3.428897411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:29.642749   13100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:51:29.748157   13100 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:29.748306   13100 retry.go:31] will retry after 8.595161621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:38.353534   13100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:51:38.459619   13100 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:38.459821   13100 retry.go:31] will retry after 15.709678508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:54.184437   13100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:51:54.279586   13100 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:51:54.279586   13100 retry.go:31] will retry after 26.450324856s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-589700 -n ingress-addon-legacy-589700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-589700 -n ingress-addon-legacy-589700: exit status 6 (11.3977647s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 01:52:05.779667    8260 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 01:52:16.991771    8260 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-589700" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-589700" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (77.19s)

TestMultiNode/serial/PingHostFrom2Pods (54.12s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-314500 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-314500 -- exec busybox-5b5d89c9d6-826w2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-314500 -- exec busybox-5b5d89c9d6-826w2 -- sh -c "ping -c 1 172.19.0.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-314500 -- exec busybox-5b5d89c9d6-826w2 -- sh -c "ping -c 1 172.19.0.1": exit status 1 (10.4283899s)

-- stdout --
	PING 172.19.0.1 (172.19.0.1): 56 data bytes
	
	--- 172.19.0.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0229 02:19:43.231304    8716 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:600: Failed to ping host (172.19.0.1) from pod (busybox-5b5d89c9d6-826w2): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-314500 -- exec busybox-5b5d89c9d6-qcblm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-314500 -- exec busybox-5b5d89c9d6-qcblm -- sh -c "ping -c 1 172.19.0.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-314500 -- exec busybox-5b5d89c9d6-qcblm -- sh -c "ping -c 1 172.19.0.1": exit status 1 (10.4427577s)

-- stdout --
	PING 172.19.0.1 (172.19.0.1): 56 data bytes
	
	--- 172.19.0.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0229 02:19:54.138449    8800 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:600: Failed to ping host (172.19.0.1) from pod (busybox-5b5d89c9d6-qcblm): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-314500 -n multinode-314500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-314500 -n multinode-314500: (11.386817s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-314500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-314500 logs -n 25: (7.9493883s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-141600 ssh -- ls                    | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:09 UTC | 29 Feb 24 02:09 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-141600                           | mount-start-1-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:09 UTC | 29 Feb 24 02:10 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-141600 ssh -- ls                    | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:10 UTC | 29 Feb 24 02:10 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-141600                           | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:10 UTC | 29 Feb 24 02:10 UTC |
	| start   | -p mount-start-2-141600                           | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:10 UTC | 29 Feb 24 02:12 UTC |
	| mount   | C:\Users\jenkins.minikube5:/minikube-host         | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:12 UTC |                     |
	|         | --profile mount-start-2-141600 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-141600 ssh -- ls                    | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:12 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-141600                           | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:12 UTC |
	| delete  | -p mount-start-1-141600                           | mount-start-1-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:12 UTC |
	| start   | -p multinode-314500                               | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:13 UTC | 29 Feb 24 02:19 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- apply -f                   | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- rollout                    | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- get pods -o                | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- get pods -o                | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2 -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- get pods -o                | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC |                     |
	|         | busybox-5b5d89c9d6-826w2 -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.0.1                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC |                     |
	|         | busybox-5b5d89c9d6-qcblm -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.0.1                           |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:13:00
	Running on machine: minikube5
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:13:00.149906    8584 out.go:291] Setting OutFile to fd 1312 ...
	I0229 02:13:00.150227    8584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:13:00.150227    8584 out.go:304] Setting ErrFile to fd 1328...
	I0229 02:13:00.150227    8584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:13:00.171700    8584 out.go:298] Setting JSON to false
	I0229 02:13:00.175741    8584 start.go:129] hostinfo: {"hostname":"minikube5","uptime":269007,"bootTime":1708903773,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 02:13:00.175741    8584 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 02:13:00.177046    8584 out.go:177] * [multinode-314500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 02:13:00.177046    8584 notify.go:220] Checking for updates...
	I0229 02:13:00.178097    8584 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:13:00.178485    8584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:13:00.178485    8584 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 02:13:00.179850    8584 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:13:00.180273    8584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:13:00.181791    8584 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:13:05.205228    8584 out.go:177] * Using the hyperv driver based on user configuration
	I0229 02:13:05.206271    8584 start.go:299] selected driver: hyperv
	I0229 02:13:05.206271    8584 start.go:903] validating driver "hyperv" against <nil>
	I0229 02:13:05.206359    8584 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:13:05.251841    8584 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 02:13:05.252685    8584 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:13:05.252685    8584 cni.go:84] Creating CNI manager for ""
	I0229 02:13:05.252685    8584 cni.go:136] 0 nodes found, recommending kindnet
	I0229 02:13:05.252685    8584 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0229 02:13:05.252685    8584 start_flags.go:323] config:
	{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:13:05.253940    8584 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:13:05.255538    8584 out.go:177] * Starting control plane node multinode-314500 in cluster multinode-314500
	I0229 02:13:05.256114    8584 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:13:05.256302    8584 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 02:13:05.256344    8584 cache.go:56] Caching tarball of preloaded images
	I0229 02:13:05.256572    8584 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:13:05.256572    8584 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 02:13:05.257361    8584 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:13:05.257455    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json: {Name:mkd3169e69638735699adbb2ff8489bce372cb2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:13:05.258503    8584 start.go:365] acquiring machines lock for multinode-314500: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:13:05.258691    8584 start.go:369] acquired machines lock for "multinode-314500" in 152µs
	I0229 02:13:05.258871    8584 start.go:93] Provisioning new machine with config: &{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 02:13:05.258976    8584 start.go:125] createHost starting for "" (driver="hyperv")
	I0229 02:13:05.259751    8584 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 02:13:05.259891    8584 start.go:159] libmachine.API.Create for "multinode-314500" (driver="hyperv")
	I0229 02:13:05.259891    8584 client.go:168] LocalClient.Create starting
	I0229 02:13:05.260497    8584 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0229 02:13:05.260497    8584 main.go:141] libmachine: Decoding PEM data...
	I0229 02:13:05.260497    8584 main.go:141] libmachine: Parsing certificate...
	I0229 02:13:05.260497    8584 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0229 02:13:05.261186    8584 main.go:141] libmachine: Decoding PEM data...
	I0229 02:13:05.261186    8584 main.go:141] libmachine: Parsing certificate...
	I0229 02:13:05.261186    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0229 02:13:07.286347    8584 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0229 02:13:07.286422    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:07.286509    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0229 02:13:08.976234    8584 main.go:141] libmachine: [stdout =====>] : False
	
	I0229 02:13:08.976234    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:08.976234    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 02:13:10.405564    8584 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 02:13:10.405718    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:10.405718    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 02:13:13.896897    8584 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 02:13:13.896976    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:13.899798    8584 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 02:13:14.290871    8584 main.go:141] libmachine: Creating SSH key...
	I0229 02:13:14.527065    8584 main.go:141] libmachine: Creating VM...
	I0229 02:13:14.527065    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 02:13:17.265891    8584 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 02:13:17.266097    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:17.266097    8584 main.go:141] libmachine: Using switch "Default Switch"
	I0229 02:13:17.266238    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 02:13:18.963078    8584 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 02:13:18.963078    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:18.963078    8584 main.go:141] libmachine: Creating VHD
	I0229 02:13:18.964222    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0229 02:13:22.594784    8584 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 884B5862-3469-4CFD-B182-8E081E737039
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0229 02:13:22.594784    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:22.594784    8584 main.go:141] libmachine: Writing magic tar header
	I0229 02:13:22.594784    8584 main.go:141] libmachine: Writing SSH key tar header
	I0229 02:13:22.604709    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0229 02:13:25.650762    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:25.650762    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:25.650762    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\disk.vhd' -SizeBytes 20000MB
	I0229 02:13:28.088594    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:28.088773    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:28.088918    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-314500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0229 02:13:31.464130    8584 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-314500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 02:13:31.464130    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:31.464846    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-314500 -DynamicMemoryEnabled $false
	I0229 02:13:33.602734    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:33.602734    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:33.602734    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-314500 -Count 2
	I0229 02:13:35.681481    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:35.682414    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:35.682502    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-314500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\boot2docker.iso'
	I0229 02:13:38.162637    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:38.162637    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:38.163401    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-314500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\disk.vhd'
	I0229 02:13:40.645938    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:40.646015    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:40.646015    8584 main.go:141] libmachine: Starting VM...
	I0229 02:13:40.646015    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-314500
	I0229 02:13:43.355580    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:43.355580    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:43.355580    8584 main.go:141] libmachine: Waiting for host to start...
	I0229 02:13:43.355580    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:13:45.477300    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:13:45.477397    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:45.477397    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:13:47.817639    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:47.817639    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:48.829666    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:13:50.912195    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:13:50.912241    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:50.912370    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:13:53.314227    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:53.314300    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:54.326584    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:13:56.402395    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:13:56.403080    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:56.403237    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:13:58.748206    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:58.748429    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:59.750928    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:01.825704    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:01.825704    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:01.826435    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:04.171500    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:14:04.171557    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:05.181274    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:07.245329    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:07.245623    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:07.245781    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:09.720669    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:09.720669    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:09.721021    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:11.754505    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:11.755426    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:11.755426    8584 machine.go:88] provisioning docker machine ...
	I0229 02:14:11.755516    8584 buildroot.go:166] provisioning hostname "multinode-314500"
	I0229 02:14:11.755562    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:13.804208    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:13.804208    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:13.804335    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:16.247231    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:16.248239    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:16.254331    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:14:16.267585    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:14:16.267585    8584 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-314500 && echo "multinode-314500" | sudo tee /etc/hostname
	I0229 02:14:16.424392    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-314500
	
	I0229 02:14:16.424516    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:18.448299    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:18.448299    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:18.448830    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:20.858056    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:20.858056    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:20.863979    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:14:20.864174    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:14:20.864174    8584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-314500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-314500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-314500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:14:21.010675    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:14:21.010763    8584 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 02:14:21.010763    8584 buildroot.go:174] setting up certificates
	I0229 02:14:21.010852    8584 provision.go:83] configureAuth start
	I0229 02:14:21.011112    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:22.998181    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:22.998447    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:22.998552    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:25.432573    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:25.432573    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:25.433124    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:27.425883    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:27.426494    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:27.426494    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:29.833478    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:29.833478    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:29.833478    8584 provision.go:138] copyHostCerts
	I0229 02:14:29.834264    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 02:14:29.834264    8584 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 02:14:29.834264    8584 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 02:14:29.834791    8584 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 02:14:29.835948    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 02:14:29.836088    8584 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 02:14:29.836088    8584 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 02:14:29.836088    8584 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 02:14:29.837182    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 02:14:29.837305    8584 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 02:14:29.837396    8584 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 02:14:29.837627    8584 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0229 02:14:29.838481    8584 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-314500 san=[172.19.2.165 172.19.2.165 localhost 127.0.0.1 minikube multinode-314500]
	I0229 02:14:29.990342    8584 provision.go:172] copyRemoteCerts
	I0229 02:14:29.998349    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:14:29.999347    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:32.015676    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:32.015676    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:32.016407    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:34.434860    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:34.435751    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:34.435751    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:14:34.540272    8584 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5416689s)
	I0229 02:14:34.540378    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 02:14:34.540655    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 02:14:34.589037    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 02:14:34.589037    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0229 02:14:34.637988    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 02:14:34.638288    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 02:14:34.684997    8584 provision.go:86] duration metric: configureAuth took 13.6732738s
	I0229 02:14:34.684997    8584 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:14:34.685957    8584 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:14:34.685957    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:36.732569    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:36.732569    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:36.732893    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:39.171929    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:39.171986    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:39.176641    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:14:39.177166    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:14:39.177237    8584 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 02:14:39.296794    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 02:14:39.296888    8584 buildroot.go:70] root file system type: tmpfs
	I0229 02:14:39.296957    8584 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 02:14:39.296957    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:41.315910    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:41.315910    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:41.315910    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:43.719853    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:43.720852    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:43.725258    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:14:43.725666    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:14:43.725666    8584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 02:14:43.881883    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 02:14:43.882199    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:45.916519    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:45.916519    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:45.917559    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:48.351202    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:48.351586    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:48.356595    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:14:48.356668    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:14:48.356668    8584 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 02:14:49.392262    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 02:14:49.392262    8584 machine.go:91] provisioned docker machine in 37.6347323s
	I0229 02:14:49.392262    8584 client.go:171] LocalClient.Create took 1m44.1265457s
	I0229 02:14:49.392262    8584 start.go:167] duration metric: libmachine.API.Create for "multinode-314500" took 1m44.1265457s
	I0229 02:14:49.392262    8584 start.go:300] post-start starting for "multinode-314500" (driver="hyperv")
	I0229 02:14:49.393258    8584 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:14:49.402259    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:14:49.402259    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:51.395389    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:51.395616    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:51.395690    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:53.788270    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:53.788752    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:53.789362    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:14:53.893141    8584 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.490524s)
	I0229 02:14:53.905375    8584 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:14:53.912851    8584 command_runner.go:130] > NAME=Buildroot
	I0229 02:14:53.912851    8584 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 02:14:53.912851    8584 command_runner.go:130] > ID=buildroot
	I0229 02:14:53.912851    8584 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 02:14:53.912851    8584 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 02:14:53.912851    8584 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:14:53.912851    8584 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 02:14:53.913631    8584 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 02:14:53.914277    8584 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> 33122.pem in /etc/ssl/certs
	I0229 02:14:53.914277    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /etc/ssl/certs/33122.pem
	I0229 02:14:53.923918    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:14:53.943567    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /etc/ssl/certs/33122.pem (1708 bytes)
	I0229 02:14:53.989666    8584 start.go:303] post-start completed in 4.5952349s
	I0229 02:14:53.991784    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:55.999148    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:55.999350    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:55.999350    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:58.385355    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:58.385355    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:58.385948    8584 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:14:58.389663    8584 start.go:128] duration metric: createHost completed in 1m53.1242572s
	I0229 02:14:58.389764    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:15:00.365905    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:15:00.365905    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:00.365905    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:15:02.777961    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:15:02.777961    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:02.782646    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:15:02.783280    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:15:02.783280    8584 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 02:15:02.899664    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709172903.069532857
	
	I0229 02:15:02.899664    8584 fix.go:206] guest clock: 1709172903.069532857
	I0229 02:15:02.899664    8584 fix.go:219] Guest: 2024-02-29 02:15:03.069532857 +0000 UTC Remote: 2024-02-29 02:14:58.3896639 +0000 UTC m=+118.373915301 (delta=4.679868957s)
	I0229 02:15:02.899873    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:15:04.946764    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:15:04.946764    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:04.946764    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:15:07.386956    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:15:07.386956    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:07.391193    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:15:07.391193    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:15:07.391193    8584 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709172902
	I0229 02:15:07.538124    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 02:15:02 UTC 2024
	
	I0229 02:15:07.538124    8584 fix.go:226] clock set: Thu Feb 29 02:15:02 UTC 2024
	 (err=<nil>)
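The clock-sync step above measures the skew between the guest VM clock and the host, then resets the guest with `sudo date -s @<epoch>`. A minimal Python sketch (editorial annotation, not minikube source; the timestamps are copied from the `fix.go` lines in the log) of how the reported delta is derived:

```python
from datetime import datetime, timezone

# Values taken from the fix.go lines above.
guest = 1709172903.069532857  # guest clock, unix seconds
host = datetime(2024, 2, 29, 2, 14, 58, 389663,
                tzinfo=timezone.utc).timestamp()  # host ("Remote") time

delta = guest - host
print(f"delta={delta:.6f}s")  # ~4.68s, as reported in the log

# minikube then resets the guest clock to the host's current time at the
# moment of the fix, truncated to whole seconds (the log shows
# `sudo date -s @1709172902`, a few seconds after the measurement).
```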
	I0229 02:15:07.538124    8584 start.go:83] releasing machines lock for "multinode-314500", held for 2m2.2725929s
	I0229 02:15:07.538124    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:15:09.578277    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:15:09.578277    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:09.578477    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:15:12.017474    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:15:12.017474    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:12.020803    8584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:15:12.020938    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:15:12.028085    8584 ssh_runner.go:195] Run: cat /version.json
	I0229 02:15:12.028085    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:15:14.106976    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:15:14.106976    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:14.107962    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:15:14.108048    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:15:14.108166    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:14.108210    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:15:16.599162    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:15:16.599162    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:16.599717    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:15:16.624118    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:15:16.624199    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:16.624505    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:15:16.878087    8584 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 02:15:16.878258    8584 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0229 02:15:16.878258    8584 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8570973s)
	I0229 02:15:16.878258    8584 ssh_runner.go:235] Completed: cat /version.json: (4.8499018s)
	I0229 02:15:16.891953    8584 ssh_runner.go:195] Run: systemctl --version
	I0229 02:15:16.901191    8584 command_runner.go:130] > systemd 252 (252)
	I0229 02:15:16.901288    8584 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0229 02:15:16.911194    8584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 02:15:16.920182    8584 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0229 02:15:16.920182    8584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:15:16.929614    8584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:15:16.958720    8584 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0229 02:15:16.958791    8584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:15:16.958791    8584 start.go:475] detecting cgroup driver to use...
	I0229 02:15:16.958791    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:15:16.993577    8584 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0229 02:15:17.006166    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 02:15:17.036528    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:15:17.056400    8584 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:15:17.066084    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:15:17.094368    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:15:17.125650    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:15:17.155407    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:15:17.184091    8584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:15:17.211981    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 02:15:17.240589    8584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:15:17.258992    8584 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 02:15:17.271051    8584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:15:17.301079    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:15:17.510984    8584 ssh_runner.go:195] Run: sudo systemctl restart containerd
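The containerd reconfiguration above is a series of in-place `sed` edits to `/etc/containerd/config.toml` followed by a daemon-reload and restart. A hypothetical Python equivalent of two of those substitutions (the sample TOML fragment is invented for illustration and is not the VM's real config):

```python
import re

# Hypothetical config.toml fragment resembling what the sed commands
# in the log above operate on.
config = '''\
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
sandbox_image = "registry.k8s.io/pause:3.8"
'''

# Mirror of: sed -r 's|^( *)sandbox_image = .*$|\\1sandbox_image = "registry.k8s.io/pause:3.9"|'
config = re.sub(r'(?m)^( *)sandbox_image = .*$',
                r'\1sandbox_image = "registry.k8s.io/pause:3.9"', config)
# Mirror of: sed -r 's|^( *)SystemdCgroup = .*$|\\1SystemdCgroup = false|g'
# (minikube is configuring containerd for the "cgroupfs" driver here).
config = re.sub(r'(?m)^( *)SystemdCgroup = .*$',
                r'\1SystemdCgroup = false', config)
print(config)
```

The leading-whitespace capture group preserves the original TOML indentation, just as the `\1` backreference does in the logged `sed` commands.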
	I0229 02:15:17.540848    8584 start.go:475] detecting cgroup driver to use...
	I0229 02:15:17.549602    8584 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 02:15:17.574482    8584 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0229 02:15:17.574482    8584 command_runner.go:130] > [Unit]
	I0229 02:15:17.574482    8584 command_runner.go:130] > Description=Docker Application Container Engine
	I0229 02:15:17.574482    8584 command_runner.go:130] > Documentation=https://docs.docker.com
	I0229 02:15:17.574482    8584 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0229 02:15:17.574482    8584 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0229 02:15:17.574482    8584 command_runner.go:130] > StartLimitBurst=3
	I0229 02:15:17.574482    8584 command_runner.go:130] > StartLimitIntervalSec=60
	I0229 02:15:17.574482    8584 command_runner.go:130] > [Service]
	I0229 02:15:17.574482    8584 command_runner.go:130] > Type=notify
	I0229 02:15:17.574482    8584 command_runner.go:130] > Restart=on-failure
	I0229 02:15:17.574482    8584 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0229 02:15:17.574482    8584 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0229 02:15:17.574482    8584 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0229 02:15:17.574482    8584 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0229 02:15:17.574482    8584 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0229 02:15:17.574482    8584 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0229 02:15:17.574482    8584 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0229 02:15:17.574482    8584 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0229 02:15:17.574482    8584 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0229 02:15:17.574482    8584 command_runner.go:130] > ExecStart=
	I0229 02:15:17.574482    8584 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0229 02:15:17.574482    8584 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0229 02:15:17.574482    8584 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0229 02:15:17.574482    8584 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0229 02:15:17.574482    8584 command_runner.go:130] > LimitNOFILE=infinity
	I0229 02:15:17.574482    8584 command_runner.go:130] > LimitNPROC=infinity
	I0229 02:15:17.574482    8584 command_runner.go:130] > LimitCORE=infinity
	I0229 02:15:17.574482    8584 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0229 02:15:17.574482    8584 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0229 02:15:17.574482    8584 command_runner.go:130] > TasksMax=infinity
	I0229 02:15:17.574482    8584 command_runner.go:130] > TimeoutStartSec=0
	I0229 02:15:17.574482    8584 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0229 02:15:17.574482    8584 command_runner.go:130] > Delegate=yes
	I0229 02:15:17.574482    8584 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0229 02:15:17.574482    8584 command_runner.go:130] > KillMode=process
	I0229 02:15:17.574482    8584 command_runner.go:130] > [Install]
	I0229 02:15:17.574482    8584 command_runner.go:130] > WantedBy=multi-user.target
	I0229 02:15:17.584629    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:15:17.616355    8584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:15:17.657950    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:15:17.693651    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:15:17.729096    8584 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:15:17.784099    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:15:17.808125    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:15:17.842233    8584 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0229 02:15:17.851465    8584 ssh_runner.go:195] Run: which cri-dockerd
	I0229 02:15:17.862101    8584 command_runner.go:130] > /usr/bin/cri-dockerd
	I0229 02:15:17.871161    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 02:15:17.889692    8584 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 02:15:17.933551    8584 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 02:15:18.134287    8584 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 02:15:18.310331    8584 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 02:15:18.310331    8584 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 02:15:18.357955    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:15:18.552365    8584 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 02:15:20.070091    8584 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5176409s)
	I0229 02:15:20.081202    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 02:15:20.122115    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:15:20.159070    8584 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 02:15:20.360745    8584 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 02:15:20.562103    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:15:20.747807    8584 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 02:15:20.790021    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:15:20.823798    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:15:21.024568    8584 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 02:15:21.124460    8584 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 02:15:21.138536    8584 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 02:15:21.147715    8584 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0229 02:15:21.147715    8584 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 02:15:21.147715    8584 command_runner.go:130] > Device: 0,22	Inode: 889         Links: 1
	I0229 02:15:21.147715    8584 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0229 02:15:21.147715    8584 command_runner.go:130] > Access: 2024-02-29 02:15:21.219763442 +0000
	I0229 02:15:21.147715    8584 command_runner.go:130] > Modify: 2024-02-29 02:15:21.219763442 +0000
	I0229 02:15:21.147715    8584 command_runner.go:130] > Change: 2024-02-29 02:15:21.223763631 +0000
	I0229 02:15:21.147715    8584 command_runner.go:130] >  Birth: -
	I0229 02:15:21.147715    8584 start.go:543] Will wait 60s for crictl version
	I0229 02:15:21.160607    8584 ssh_runner.go:195] Run: which crictl
	I0229 02:15:21.166613    8584 command_runner.go:130] > /usr/bin/crictl
	I0229 02:15:21.175685    8584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:15:21.243995    8584 command_runner.go:130] > Version:  0.1.0
	I0229 02:15:21.244098    8584 command_runner.go:130] > RuntimeName:  docker
	I0229 02:15:21.244098    8584 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0229 02:15:21.244098    8584 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 02:15:21.244098    8584 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 02:15:21.252876    8584 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:15:21.284945    8584 command_runner.go:130] > 24.0.7
	I0229 02:15:21.293857    8584 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:15:21.328569    8584 command_runner.go:130] > 24.0.7
	I0229 02:15:21.329772    8584 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 02:15:21.329981    8584 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 02:15:21.335723    8584 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 02:15:21.335723    8584 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 02:15:21.335723    8584 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 02:15:21.335830    8584 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:a3:c1 Flags:up|broadcast|multicast|running}
	I0229 02:15:21.339030    8584 ip.go:210] interface addr: fe80::fc78:4865:5cac:d448/64
	I0229 02:15:21.339030    8584 ip.go:210] interface addr: 172.19.0.1/20
	I0229 02:15:21.346674    8584 ssh_runner.go:195] Run: grep 172.19.0.1	host.minikube.internal$ /etc/hosts
	I0229 02:15:21.352657    8584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
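The `/etc/hosts` rewrite above drops any stale `host.minikube.internal` entry and appends the gateway IP found for the Default Switch interface. The same filter-and-append logic, sketched in Python (the starting file contents are hypothetical):

```python
# Hypothetical starting contents of /etc/hosts.
hosts = "127.0.0.1\tlocalhost\n172.19.0.9\thost.minikube.internal\n"

ip = "172.19.0.1"  # gateway address found in the log above
# Mirror of: { grep -v $'\thost.minikube.internal$' /etc/hosts;
#              echo "<ip>\thost.minikube.internal"; } > /tmp/h.$$
kept = [line for line in hosts.splitlines()
        if not line.endswith("\thost.minikube.internal")]
kept.append(f"{ip}\thost.minikube.internal")
new_hosts = "\n".join(kept) + "\n"
print(new_hosts)
```

The `grep -v` followed by `echo` in the log is the shell idiom for "replace or insert" a single hosts entry atomically via a temp file.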
	I0229 02:15:21.374301    8584 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:15:21.380708    8584 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 02:15:21.407908    8584 docker.go:685] Got preloaded images: 
	I0229 02:15:21.407908    8584 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0229 02:15:21.417190    8584 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 02:15:21.434433    8584 command_runner.go:139] > {"Repositories":{}}
	I0229 02:15:21.444446    8584 ssh_runner.go:195] Run: which lz4
	I0229 02:15:21.452611    8584 command_runner.go:130] > /usr/bin/lz4
	I0229 02:15:21.453860    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0229 02:15:21.463263    8584 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 02:15:21.469865    8584 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:15:21.470175    8584 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:15:21.470424    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0229 02:15:23.210150    8584 docker.go:649] Took 1.755758 seconds to copy over tarball
	I0229 02:15:23.222182    8584 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:15:33.289701    8584 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (10.0669568s)
	I0229 02:15:33.289701    8584 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 02:15:33.357787    8584 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 02:15:33.376545    8584 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8
bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021
a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0229 02:15:33.376717    8584 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0229 02:15:33.419432    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:15:33.617988    8584 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 02:15:35.620810    8584 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.0027096s)
	I0229 02:15:35.628068    8584 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0229 02:15:35.653067    8584 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:15:35.654344    8584 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 02:15:35.654416    8584 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:15:35.664071    8584 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 02:15:35.699171    8584 command_runner.go:130] > cgroupfs
	I0229 02:15:35.700391    8584 cni.go:84] Creating CNI manager for ""
	I0229 02:15:35.700684    8584 cni.go:136] 1 nodes found, recommending kindnet
	I0229 02:15:35.700684    8584 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:15:35.700770    8584 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.2.165 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-314500 NodeName:multinode-314500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.2.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.2.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:15:35.701130    8584 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.2.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-314500"
	  kubeletExtraArgs:
	    node-ip: 172.19.2.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.2.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:15:35.701263    8584 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-314500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.2.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:15:35.711763    8584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:15:35.728898    8584 command_runner.go:130] > kubeadm
	I0229 02:15:35.728898    8584 command_runner.go:130] > kubectl
	I0229 02:15:35.728898    8584 command_runner.go:130] > kubelet
	I0229 02:15:35.728898    8584 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:15:35.737884    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:15:35.754466    8584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0229 02:15:35.786652    8584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:15:35.818096    8584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0229 02:15:35.860377    8584 ssh_runner.go:195] Run: grep 172.19.2.165	control-plane.minikube.internal$ /etc/hosts
	I0229 02:15:35.867122    8584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.2.165	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:15:35.887430    8584 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500 for IP: 172.19.2.165
	I0229 02:15:35.887430    8584 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:35.888418    8584 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 02:15:35.888418    8584 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 02:15:35.889416    8584 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.key
	I0229 02:15:35.889416    8584 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.crt with IP's: []
	I0229 02:15:36.213588    8584 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.crt ...
	I0229 02:15:36.213588    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.crt: {Name:mk73b75f20ca1d2e0bec389400db48fd623b8015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:36.214068    8584 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.key ...
	I0229 02:15:36.214068    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.key: {Name:mkb1b1a5bd39eef2e9536007ed8aa8f214199fbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:36.215219    8584 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.3d9898f0
	I0229 02:15:36.215219    8584 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.3d9898f0 with IP's: [172.19.2.165 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 02:15:36.494396    8584 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.3d9898f0 ...
	I0229 02:15:36.494396    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.3d9898f0: {Name:mk936caf0d565f97194ec84a769f367930fe715a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:36.495081    8584 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.3d9898f0 ...
	I0229 02:15:36.496079    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.3d9898f0: {Name:mkafd075e8297f3e248df3102b52bd4b41170a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:36.496315    8584 certs.go:337] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.3d9898f0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt
	I0229 02:15:36.510316    8584 certs.go:341] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.3d9898f0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key
	I0229 02:15:36.510683    8584 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key
	I0229 02:15:36.510683    8584 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt with IP's: []
	I0229 02:15:36.721693    8584 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt ...
	I0229 02:15:36.721693    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt: {Name:mkd74b50be0a408b84b859db2dc4cdc2614195ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:36.723948    8584 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key ...
	I0229 02:15:36.724009    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key: {Name:mk76464224e14bc795ee483f0f2ecb96ca808e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:36.724747    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 02:15:36.724747    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 02:15:36.725273    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 02:15:36.735647    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 02:15:36.736197    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 02:15:36.736248    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 02:15:36.736248    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 02:15:36.736248    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 02:15:36.737101    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem (1338 bytes)
	W0229 02:15:36.737357    8584 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312_empty.pem, impossibly tiny 0 bytes
	I0229 02:15:36.737357    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 02:15:36.737357    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 02:15:36.737906    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 02:15:36.738244    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0229 02:15:36.738845    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem (1708 bytes)
	I0229 02:15:36.739105    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /usr/share/ca-certificates/33122.pem
	I0229 02:15:36.739320    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:36.739481    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem -> /usr/share/ca-certificates/3312.pem
	I0229 02:15:36.740148    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:15:36.786597    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:15:36.830608    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:15:36.875812    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 02:15:36.921431    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:15:36.966942    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:15:37.013401    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:15:37.059070    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:15:37.106455    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /usr/share/ca-certificates/33122.pem (1708 bytes)
	I0229 02:15:37.156672    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:15:37.203394    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem --> /usr/share/ca-certificates/3312.pem (1338 bytes)
	I0229 02:15:37.251707    8584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:15:37.295710    8584 ssh_runner.go:195] Run: openssl version
	I0229 02:15:37.305455    8584 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 02:15:37.316796    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/33122.pem && ln -fs /usr/share/ca-certificates/33122.pem /etc/ssl/certs/33122.pem"
	I0229 02:15:37.346166    8584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/33122.pem
	I0229 02:15:37.353171    8584 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:15:37.354028    8584 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:15:37.362846    8584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/33122.pem
	I0229 02:15:37.373491    8584 command_runner.go:130] > 3ec20f2e
	I0229 02:15:37.385486    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/33122.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:15:37.415489    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:15:37.444489    8584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:37.451960    8584 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:37.451960    8584 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:37.460116    8584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:37.469671    8584 command_runner.go:130] > b5213941
	I0229 02:15:37.480093    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:15:37.508112    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3312.pem && ln -fs /usr/share/ca-certificates/3312.pem /etc/ssl/certs/3312.pem"
	I0229 02:15:37.535081    8584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3312.pem
	I0229 02:15:37.542076    8584 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:15:37.542657    8584 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:15:37.552276    8584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3312.pem
	I0229 02:15:37.561453    8584 command_runner.go:130] > 51391683
	I0229 02:15:37.570468    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3312.pem /etc/ssl/certs/51391683.0"
	I0229 02:15:37.599088    8584 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:15:37.607208    8584 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:15:37.607208    8584 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:15:37.607627    8584 kubeadm.go:404] StartCluster: {Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.165 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:15:37.614406    8584 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 02:15:37.651041    8584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:15:37.669431    8584 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0229 02:15:37.669431    8584 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0229 02:15:37.669431    8584 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0229 02:15:37.679297    8584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:15:37.704096    8584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:15:37.722096    8584 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0229 02:15:37.722096    8584 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0229 02:15:37.722096    8584 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0229 02:15:37.722096    8584 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:15:37.723135    8584 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:15:37.723135    8584 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 02:15:38.381888    8584 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:15:38.381962    8584 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:15:51.901148    8584 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 02:15:51.901148    8584 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0229 02:15:51.901148    8584 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:15:51.901148    8584 command_runner.go:130] > [preflight] Running pre-flight checks
	I0229 02:15:51.901731    8584 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:15:51.901731    8584 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:15:51.901836    8584 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:15:51.901836    8584 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:15:51.902556    8584 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:15:51.902556    8584 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:15:51.902691    8584 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:15:51.902691    8584 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:15:51.903567    8584 out.go:204]   - Generating certificates and keys ...
	I0229 02:15:51.903626    8584 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:15:51.903626    8584 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0229 02:15:51.903626    8584 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:15:51.903626    8584 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0229 02:15:51.904297    8584 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 02:15:51.904297    8584 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 02:15:51.904297    8584 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0229 02:15:51.904297    8584 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 02:15:51.904297    8584 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 02:15:51.904297    8584 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0229 02:15:51.904906    8584 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 02:15:51.904937    8584 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0229 02:15:51.905063    8584 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 02:15:51.905063    8584 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0229 02:15:51.905063    8584 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-314500] and IPs [172.19.2.165 127.0.0.1 ::1]
	I0229 02:15:51.905063    8584 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-314500] and IPs [172.19.2.165 127.0.0.1 ::1]
	I0229 02:15:51.905063    8584 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 02:15:51.905595    8584 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0229 02:15:51.905775    8584 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-314500] and IPs [172.19.2.165 127.0.0.1 ::1]
	I0229 02:15:51.905775    8584 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-314500] and IPs [172.19.2.165 127.0.0.1 ::1]
	I0229 02:15:51.905775    8584 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 02:15:51.905775    8584 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 02:15:51.906311    8584 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 02:15:51.906311    8584 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 02:15:51.906451    8584 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0229 02:15:51.906451    8584 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 02:15:51.906648    8584 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:15:51.906648    8584 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:15:51.906648    8584 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:15:51.906648    8584 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:15:51.906648    8584 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:15:51.906648    8584 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:15:51.907239    8584 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:15:51.907322    8584 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:15:51.907444    8584 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:15:51.907444    8584 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:15:51.907639    8584 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:15:51.907639    8584 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:15:51.907772    8584 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:15:51.907840    8584 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:15:51.908342    8584 out.go:204]   - Booting up control plane ...
	I0229 02:15:51.908342    8584 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:15:51.908342    8584 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:15:51.908868    8584 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:15:51.908868    8584 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:15:51.908983    8584 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:15:51.909056    8584 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:15:51.909179    8584 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:15:51.909179    8584 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:15:51.909179    8584 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:15:51.909179    8584 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:15:51.909179    8584 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:15:51.909179    8584 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 02:15:51.909950    8584 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:15:51.909950    8584 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:15:51.910229    8584 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.507183 seconds
	I0229 02:15:51.910229    8584 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.507183 seconds
	I0229 02:15:51.910438    8584 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:15:51.910552    8584 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:15:51.910616    8584 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:15:51.910616    8584 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:15:51.910616    8584 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:15:51.911258    8584 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:15:51.911797    8584 command_runner.go:130] > [mark-control-plane] Marking the node multinode-314500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:15:51.911912    8584 kubeadm.go:322] [mark-control-plane] Marking the node multinode-314500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:15:51.911912    8584 command_runner.go:130] > [bootstrap-token] Using token: 0hv5co.fj6ugwf787q3himr
	I0229 02:15:51.911912    8584 kubeadm.go:322] [bootstrap-token] Using token: 0hv5co.fj6ugwf787q3himr
	I0229 02:15:51.912545    8584 out.go:204]   - Configuring RBAC rules ...
	I0229 02:15:51.912545    8584 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:15:51.912545    8584 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:15:51.912545    8584 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:15:51.913096    8584 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:15:51.913282    8584 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:15:51.913282    8584 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:15:51.913282    8584 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:15:51.913282    8584 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:15:51.913282    8584 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:15:51.913282    8584 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:15:51.913282    8584 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:15:51.913282    8584 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:15:51.914161    8584 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:15:51.914161    8584 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:15:51.914161    8584 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0229 02:15:51.914161    8584 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:15:51.914161    8584 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:15:51.914161    8584 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0229 02:15:51.914161    8584 kubeadm.go:322] 
	I0229 02:15:51.914161    8584 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0229 02:15:51.914161    8584 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:15:51.914161    8584 kubeadm.go:322] 
	I0229 02:15:51.914161    8584 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0229 02:15:51.914161    8584 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:15:51.914161    8584 kubeadm.go:322] 
	I0229 02:15:51.914161    8584 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0229 02:15:51.914161    8584 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:15:51.914161    8584 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:15:51.914161    8584 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:15:51.914161    8584 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:15:51.915155    8584 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:15:51.915155    8584 kubeadm.go:322] 
	I0229 02:15:51.915155    8584 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0229 02:15:51.915155    8584 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:15:51.915155    8584 kubeadm.go:322] 
	I0229 02:15:51.915155    8584 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:15:51.915155    8584 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:15:51.915155    8584 kubeadm.go:322] 
	I0229 02:15:51.915155    8584 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0229 02:15:51.915155    8584 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:15:51.915155    8584 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:15:51.915155    8584 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:15:51.915155    8584 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:15:51.915155    8584 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:15:51.915155    8584 kubeadm.go:322] 
	I0229 02:15:51.915155    8584 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:15:51.915155    8584 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:15:51.915155    8584 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:15:51.915155    8584 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0229 02:15:51.916151    8584 kubeadm.go:322] 
	I0229 02:15:51.916151    8584 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0hv5co.fj6ugwf787q3himr \
	I0229 02:15:51.916151    8584 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 0hv5co.fj6ugwf787q3himr \
	I0229 02:15:51.916151    8584 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 \
	I0229 02:15:51.916151    8584 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 \
	I0229 02:15:51.916151    8584 command_runner.go:130] > 	--control-plane 
	I0229 02:15:51.916151    8584 kubeadm.go:322] 	--control-plane 
	I0229 02:15:51.916151    8584 kubeadm.go:322] 
	I0229 02:15:51.916151    8584 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:15:51.916151    8584 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:15:51.916151    8584 kubeadm.go:322] 
	I0229 02:15:51.916151    8584 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0hv5co.fj6ugwf787q3himr \
	I0229 02:15:51.916151    8584 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0hv5co.fj6ugwf787q3himr \
	I0229 02:15:51.917165    8584 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 
	I0229 02:15:51.917165    8584 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 
	I0229 02:15:51.917165    8584 cni.go:84] Creating CNI manager for ""
	I0229 02:15:51.917165    8584 cni.go:136] 1 nodes found, recommending kindnet
	I0229 02:15:51.917165    8584 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0229 02:15:51.926742    8584 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 02:15:51.933753    8584 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 02:15:51.933753    8584 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 02:15:51.933753    8584 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 02:15:51.933753    8584 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 02:15:51.933753    8584 command_runner.go:130] > Access: 2024-02-29 02:14:07.987005400 +0000
	I0229 02:15:51.933753    8584 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 02:15:51.933753    8584 command_runner.go:130] > Change: 2024-02-29 02:13:59.368000000 +0000
	I0229 02:15:51.933753    8584 command_runner.go:130] >  Birth: -
	I0229 02:15:51.934743    8584 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 02:15:51.934743    8584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 02:15:51.986743    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 02:15:53.339082    8584 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0229 02:15:53.347087    8584 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0229 02:15:53.357471    8584 command_runner.go:130] > serviceaccount/kindnet created
	I0229 02:15:53.372482    8584 command_runner.go:130] > daemonset.apps/kindnet created
	I0229 02:15:53.376817    8584 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.3899963s)
	I0229 02:15:53.376885    8584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:15:53.387776    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:53.389804    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=multinode-314500 minikube.k8s.io/updated_at=2024_02_29T02_15_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:53.410555    8584 command_runner.go:130] > -16
	I0229 02:15:53.410635    8584 ops.go:34] apiserver oom_adj: -16
	I0229 02:15:53.572950    8584 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0229 02:15:53.573242    8584 command_runner.go:130] > node/multinode-314500 labeled
	I0229 02:15:53.583665    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:53.702923    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:54.086498    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:54.213077    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:54.589736    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:54.707092    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:55.094365    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:55.219281    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:55.594452    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:55.714603    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:56.086985    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:56.210093    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:56.594292    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:56.710854    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:57.092717    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:57.202893    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:57.596461    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:57.709250    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:58.097022    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:58.207043    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:58.585505    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:58.700383    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:59.087317    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:59.199211    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:59.589420    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:59.709521    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:00.099207    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:00.248193    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:00.587996    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:00.710610    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:01.089490    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:01.210939    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:01.588438    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:01.719364    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:02.095606    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:02.219852    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:02.583712    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:02.688720    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:03.085804    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:03.198833    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:03.589679    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:03.697234    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:04.094021    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:04.277722    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:04.585546    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:04.713527    8584 command_runner.go:130] > NAME      SECRETS   AGE
	I0229 02:16:04.713527    8584 command_runner.go:130] > default   0         0s
	I0229 02:16:04.713527    8584 kubeadm.go:1088] duration metric: took 11.3359271s to wait for elevateKubeSystemPrivileges.
	I0229 02:16:04.713527    8584 kubeadm.go:406] StartCluster complete in 27.1044579s
	I0229 02:16:04.713527    8584 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:16:04.713527    8584 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:16:04.714507    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:16:04.716496    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:16:04.716496    8584 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:16:04.716496    8584 addons.go:69] Setting storage-provisioner=true in profile "multinode-314500"
	I0229 02:16:04.716496    8584 addons.go:234] Setting addon storage-provisioner=true in "multinode-314500"
	I0229 02:16:04.716496    8584 addons.go:69] Setting default-storageclass=true in profile "multinode-314500"
	I0229 02:16:04.716496    8584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-314500"
	I0229 02:16:04.716496    8584 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:16:04.716496    8584 host.go:66] Checking if "multinode-314500" exists ...
	I0229 02:16:04.717509    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:16:04.718505    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:16:04.730512    8584 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:16:04.731520    8584 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:16:04.732504    8584 cert_rotation.go:137] Starting client certificate rotation controller
	I0229 02:16:04.732504    8584 round_trippers.go:463] GET https://172.19.2.165:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 02:16:04.733522    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:04.733522    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:04.733522    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:04.749641    8584 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0229 02:16:04.750464    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:04.750464    8584 round_trippers.go:580]     Audit-Id: 9956226a-c219-49d1-8683-804ff4a7c6af
	I0229 02:16:04.750525    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:04.750525    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:04.750525    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:04.750525    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:04.750525    8584 round_trippers.go:580]     Content-Length: 291
	I0229 02:16:04.750525    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:04 GMT
	I0229 02:16:04.750525    8584 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"255","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0229 02:16:04.751271    8584 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"255","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0229 02:16:04.751368    8584 round_trippers.go:463] PUT https://172.19.2.165:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 02:16:04.751368    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:04.751368    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:04.751368    8584 round_trippers.go:473]     Content-Type: application/json
	I0229 02:16:04.751368    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:04.770121    8584 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0229 02:16:04.770435    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:04.770435    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:04.770435    8584 round_trippers.go:580]     Content-Length: 291
	I0229 02:16:04.770435    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:04 GMT
	I0229 02:16:04.770435    8584 round_trippers.go:580]     Audit-Id: 926adfd2-ba76-4038-9182-d6c558cc8d06
	I0229 02:16:04.770435    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:04.770518    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:04.770518    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:04.770518    8584 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"337","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0229 02:16:04.883459    8584 command_runner.go:130] > apiVersion: v1
	I0229 02:16:04.883862    8584 command_runner.go:130] > data:
	I0229 02:16:04.883862    8584 command_runner.go:130] >   Corefile: |
	I0229 02:16:04.884003    8584 command_runner.go:130] >     .:53 {
	I0229 02:16:04.884003    8584 command_runner.go:130] >         errors
	I0229 02:16:04.884003    8584 command_runner.go:130] >         health {
	I0229 02:16:04.884003    8584 command_runner.go:130] >            lameduck 5s
	I0229 02:16:04.884003    8584 command_runner.go:130] >         }
	I0229 02:16:04.884126    8584 command_runner.go:130] >         ready
	I0229 02:16:04.884188    8584 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0229 02:16:04.884188    8584 command_runner.go:130] >            pods insecure
	I0229 02:16:04.884188    8584 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0229 02:16:04.884188    8584 command_runner.go:130] >            ttl 30
	I0229 02:16:04.884188    8584 command_runner.go:130] >         }
	I0229 02:16:04.884188    8584 command_runner.go:130] >         prometheus :9153
	I0229 02:16:04.884188    8584 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0229 02:16:04.884188    8584 command_runner.go:130] >            max_concurrent 1000
	I0229 02:16:04.884188    8584 command_runner.go:130] >         }
	I0229 02:16:04.884188    8584 command_runner.go:130] >         cache 30
	I0229 02:16:04.884188    8584 command_runner.go:130] >         loop
	I0229 02:16:04.884188    8584 command_runner.go:130] >         reload
	I0229 02:16:04.884188    8584 command_runner.go:130] >         loadbalance
	I0229 02:16:04.884188    8584 command_runner.go:130] >     }
	I0229 02:16:04.884188    8584 command_runner.go:130] > kind: ConfigMap
	I0229 02:16:04.884188    8584 command_runner.go:130] > metadata:
	I0229 02:16:04.884188    8584 command_runner.go:130] >   creationTimestamp: "2024-02-29T02:15:51Z"
	I0229 02:16:04.884188    8584 command_runner.go:130] >   name: coredns
	I0229 02:16:04.884188    8584 command_runner.go:130] >   namespace: kube-system
	I0229 02:16:04.884188    8584 command_runner.go:130] >   resourceVersion: "251"
	I0229 02:16:04.884188    8584 command_runner.go:130] >   uid: 3fc93d17-14a4-4d49-9f77-f2cd8adceaed
	I0229 02:16:04.887987    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 02:16:05.242860    8584 round_trippers.go:463] GET https://172.19.2.165:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 02:16:05.242860    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:05.242860    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:05.242860    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:05.287074    8584 round_trippers.go:574] Response Status: 200 OK in 43 milliseconds
	I0229 02:16:05.287143    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:05.287143    8584 round_trippers.go:580]     Content-Length: 291
	I0229 02:16:05.287213    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:05 GMT
	I0229 02:16:05.287213    8584 round_trippers.go:580]     Audit-Id: e6e6cf94-608a-4333-ac18-3d38f86552f2
	I0229 02:16:05.287213    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:05.287213    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:05.287213    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:05.287213    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:05.289816    8584 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"367","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0229 02:16:05.290759    8584 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-314500" context rescaled to 1 replicas
	I0229 02:16:05.290835    8584 start.go:223] Will wait 6m0s for node &{Name: IP:172.19.2.165 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 02:16:05.291722    8584 out.go:177] * Verifying Kubernetes components...
	I0229 02:16:05.303433    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:16:05.612313    8584 command_runner.go:130] > configmap/coredns replaced
	I0229 02:16:05.617363    8584 start.go:929] {"host.minikube.internal": 172.19.0.1} host record injected into CoreDNS's ConfigMap
	I0229 02:16:05.618519    8584 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:16:05.619544    8584 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:16:05.620617    8584 node_ready.go:35] waiting up to 6m0s for node "multinode-314500" to be "Ready" ...
	I0229 02:16:05.620617    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:05.620617    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:05.620617    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:05.620617    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:05.625396    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:05.625396    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:05.625396    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:05.625396    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:05.625396    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:05.625396    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:05.625396    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:05 GMT
	I0229 02:16:05.625396    8584 round_trippers.go:580]     Audit-Id: 410524b5-ba74-4eed-b6ad-c164114a2e45
	I0229 02:16:05.626569    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:06.130951    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:06.130951    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:06.130951    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:06.130951    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:06.134758    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:06.135746    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:06.135746    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:06.135746    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:06 GMT
	I0229 02:16:06.135746    8584 round_trippers.go:580]     Audit-Id: d3921daf-0cf7-4693-9c8c-01eed6add86d
	I0229 02:16:06.135746    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:06.135746    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:06.135871    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:06.136309    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:06.622511    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:06.622511    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:06.622511    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:06.622511    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:06.628940    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:16:06.628940    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:06.628940    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:06.628940    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:06 GMT
	I0229 02:16:06.628940    8584 round_trippers.go:580]     Audit-Id: dde4c73f-476a-4c04-8fb3-4461985f3b72
	I0229 02:16:06.628940    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:06.628940    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:06.628940    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:06.630172    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:06.883598    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:16:06.883988    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:06.885333    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:16:06.885333    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:06.886306    8584 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:16:06.886086    8584 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:16:06.887008    8584 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:16:06.887171    8584 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:16:06.887245    8584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:16:06.887293    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:16:06.888249    8584 addons.go:234] Setting addon default-storageclass=true in "multinode-314500"
	I0229 02:16:06.888325    8584 host.go:66] Checking if "multinode-314500" exists ...
	I0229 02:16:06.888997    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:16:07.129415    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:07.129415    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:07.129415    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:07.129415    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:07.137838    8584 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:16:07.137912    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:07.137912    8584 round_trippers.go:580]     Audit-Id: 399d2e3f-e8cf-4920-9750-05d41b929aad
	I0229 02:16:07.137912    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:07.137912    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:07.138018    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:07.138048    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:07.138048    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:07 GMT
	I0229 02:16:07.138329    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:07.622304    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:07.622304    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:07.622304    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:07.622304    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:07.633000    8584 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0229 02:16:07.633053    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:07.633053    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:07.633123    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:07.633123    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:07.633123    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:07.633123    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:07 GMT
	I0229 02:16:07.633123    8584 round_trippers.go:580]     Audit-Id: 6c87ad14-b146-42a7-ae05-253fa6399983
	I0229 02:16:07.633497    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:07.634314    8584 node_ready.go:58] node "multinode-314500" has status "Ready":"False"
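The loop above is minikube waiting for the node to become schedulable: it GETs `/api/v1/nodes/multinode-314500` roughly twice a second, and `node_ready.go` logs `"Ready":"False"` until the kubelet posts a `Ready` condition with status `True`. As an illustration only (not minikube's actual `node_ready.go` code), the predicate being polled amounts to unmarshalling the Node object and inspecting `status.conditions`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// node models only the slice of the v1.Node object this check needs;
// the real response bodies in the log above carry the full object
// (truncated at 4926 chars by the logger).
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// nodeReady reports whether a Node JSON document has condition
// Type=="Ready" with Status=="True" — the condition the poll loop
// in the log is waiting on.
func nodeReady(body []byte) (bool, error) {
	var n node
	if err := json.Unmarshal(body, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	// Minimal stand-in for the (truncated) response body in the log.
	body := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False","reason":"KubeletNotReady"}]}}`)
	ready, err := nodeReady(body)
	fmt.Println(ready, err) // false <nil> until the kubelet reports Ready
}
```

Each iteration re-fetches the whole object (note the unchanged `resourceVersion":"354"` in every response), which is why the same body repeats in the log until the node transitions.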
	I0229 02:16:08.129012    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:08.129128    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:08.129128    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:08.129128    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:08.133061    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:08.133061    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:08.133061    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:08.133061    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:08.133061    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:08 GMT
	I0229 02:16:08.133061    8584 round_trippers.go:580]     Audit-Id: 4486154a-148b-4852-9398-d4ef707b126a
	I0229 02:16:08.133061    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:08.133061    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:08.133587    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:08.622112    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:08.622112    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:08.622112    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:08.622112    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:08.625110    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:16:08.625110    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:08.625110    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:08 GMT
	I0229 02:16:08.625110    8584 round_trippers.go:580]     Audit-Id: 9fda18cb-76a8-4b72-85bc-268e5c5ee771
	I0229 02:16:08.625110    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:08.625110    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:08.625110    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:08.625110    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:08.626110    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:09.062859    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:16:09.062859    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:09.062859    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:16:09.128069    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:16:09.128069    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:09.128168    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:09.128168    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:09.128168    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:09.128168    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:09.128282    8584 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:16:09.128363    8584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:16:09.128396    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:16:09.132486    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:09.132486    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:09.132486    8584 round_trippers.go:580]     Audit-Id: f086feb0-3bd9-4370-9635-53e735870f89
	I0229 02:16:09.132486    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:09.132486    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:09.132486    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:09.132486    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:09.132486    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:09 GMT
	I0229 02:16:09.133491    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:09.626134    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:09.626226    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:09.626226    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:09.626226    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:09.631701    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:16:09.631701    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:09.631701    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:09.631701    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:09.631701    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:09.631701    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:09.631701    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:09 GMT
	I0229 02:16:09.631701    8584 round_trippers.go:580]     Audit-Id: a545aa49-b83a-4003-984f-45f9fe202d60
	I0229 02:16:09.631701    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:10.130946    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:10.130946    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:10.130946    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:10.130946    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:10.134969    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:10.135394    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:10.135394    8584 round_trippers.go:580]     Audit-Id: 1724a3d5-9143-406a-bca9-05b66a0b2969
	I0229 02:16:10.135394    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:10.135394    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:10.135394    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:10.135394    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:10.135394    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:10 GMT
	I0229 02:16:10.135694    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:10.136156    8584 node_ready.go:58] node "multinode-314500" has status "Ready":"False"
	I0229 02:16:10.622330    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:10.622330    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:10.622420    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:10.622420    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:10.625946    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:10.625946    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:10.625946    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:10 GMT
	I0229 02:16:10.625946    8584 round_trippers.go:580]     Audit-Id: 7d5dc576-023c-4d62-8b5e-1f61e1eb4c92
	I0229 02:16:10.625946    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:10.625946    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:10.625946    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:10.625946    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:10.625946    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:11.130592    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:11.130592    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:11.130686    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:11.130686    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:11.133777    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:11.134244    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:11.134244    8584 round_trippers.go:580]     Audit-Id: 6c014d3d-aaf2-4324-a394-1f4ceda7527a
	I0229 02:16:11.134244    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:11.134244    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:11.134244    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:11.134244    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:11.134244    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:11 GMT
	I0229 02:16:11.134511    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:11.279789    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:16:11.280790    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:11.280889    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:16:11.611705    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:16:11.611705    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:11.613235    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
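The `[executing ==>]` / `[stdout =====>]` pairs above show how the Hyper-V driver locates the guest before opening SSH: it shells out to `powershell.exe -NoProfile -NonInteractive`, first for the VM state (`( Hyper-V\Get-VM … ).state`), then for the first address of the first NIC, and feeds the resulting IP (`172.19.2.165`) to the SSH client. A sketch of how those command strings are composed (helper names here are illustrative, not libmachine's API):

```go
package main

import "fmt"

// psArgs composes the PowerShell invocation the driver shells out to,
// matching the [executing ==>] lines in the log. Illustrative helper.
func psArgs(script string) []string {
	return []string{
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", script,
	}
}

// vmIPScript builds the query for the first IP of the VM's first network
// adapter — the exact script visible in the log above.
func vmIPScript(vm string) string {
	return fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm)
}

func main() {
	fmt.Println(psArgs(vmIPScript("multinode-314500")))
}
```

The stdout of that script is a bare IP address, which is why a `Running` state check always precedes it: a stopped VM has no reported addresses and the query would come back empty.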
	I0229 02:16:11.622115    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:11.622115    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:11.622115    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:11.622115    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:11.626134    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:11.626583    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:11.626583    8584 round_trippers.go:580]     Audit-Id: 7ca4c11f-3d0b-4b6a-aeae-c8176d56d748
	I0229 02:16:11.626583    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:11.626583    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:11.626583    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:11.626583    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:11.626583    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:11 GMT
	I0229 02:16:11.626743    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:11.746983    8584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:16:12.129858    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:12.129858    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:12.129858    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:12.129858    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:12.134103    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:12.134185    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:12.134185    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:12.134185    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:12.134185    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:12 GMT
	I0229 02:16:12.134185    8584 round_trippers.go:580]     Audit-Id: e300e49e-48d6-4796-b3e3-283ceb52ba8d
	I0229 02:16:12.134185    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:12.134185    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:12.134399    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:12.424764    8584 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0229 02:16:12.424842    8584 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0229 02:16:12.424922    8584 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0229 02:16:12.425012    8584 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0229 02:16:12.425012    8584 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0229 02:16:12.425069    8584 command_runner.go:130] > pod/storage-provisioner created
	I0229 02:16:12.621581    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:12.621581    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:12.621581    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:12.621581    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:12.625839    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:12.625917    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:12.625980    8584 round_trippers.go:580]     Audit-Id: ab822128-f5fe-4739-8fe5-bd7b6f1890e7
	I0229 02:16:12.625980    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:12.625980    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:12.625980    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:12.625980    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:12.625980    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:12 GMT
	I0229 02:16:12.626299    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:12.626886    8584 node_ready.go:58] node "multinode-314500" has status "Ready":"False"
	I0229 02:16:13.130997    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:13.130997    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:13.130997    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:13.130997    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:13.137409    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:16:13.137482    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:13.137482    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:13.137482    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:13.137482    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:13.137482    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:13.137482    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:13 GMT
	I0229 02:16:13.137482    8584 round_trippers.go:580]     Audit-Id: ba54e846-36f6-446a-839e-4e0e3c8dba08
	I0229 02:16:13.137692    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:13.621687    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:13.621687    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:13.621687    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:13.621687    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:13.624271    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:16:13.625273    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:13.625273    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:13.625273    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:13.625273    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:13.625273    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:13.625273    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:13 GMT
	I0229 02:16:13.625273    8584 round_trippers.go:580]     Audit-Id: 87a66a52-80a0-45f3-8af7-9d492d7d293b
	I0229 02:16:13.625391    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:13.739754    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:16:13.739808    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:13.739808    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:16:13.872755    8584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:16:14.123275    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:14.123367    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:14.123367    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:14.123367    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:14.126646    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:16:14.126646    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:14.126646    8584 round_trippers.go:580]     Audit-Id: 19134671-8c5f-4095-b846-f6fbd46bcd0b
	I0229 02:16:14.126646    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:14.126646    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:14.126646    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:14.126646    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:14.126747    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:14 GMT
	I0229 02:16:14.127021    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:14.135079    8584 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0229 02:16:14.135079    8584 round_trippers.go:463] GET https://172.19.2.165:8443/apis/storage.k8s.io/v1/storageclasses
	I0229 02:16:14.135079    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:14.135079    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:14.135605    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:14.138653    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:14.138653    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:14.138653    8584 round_trippers.go:580]     Audit-Id: 3a1a0ba3-f2e4-4d64-b6c4-3de42a6386a0
	I0229 02:16:14.138653    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:14.138653    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:14.138653    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:14.138653    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:14.138653    8584 round_trippers.go:580]     Content-Length: 1273
	I0229 02:16:14.138653    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:14 GMT
	I0229 02:16:14.138653    8584 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"standard","uid":"a7ad9511-65e8-4eef-89b4-7c1b803fc689","resourceVersion":"413","creationTimestamp":"2024-02-29T02:16:14Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-29T02:16:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0229 02:16:14.138653    8584 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a7ad9511-65e8-4eef-89b4-7c1b803fc689","resourceVersion":"413","creationTimestamp":"2024-02-29T02:16:14Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-29T02:16:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0229 02:16:14.138653    8584 round_trippers.go:463] PUT https://172.19.2.165:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0229 02:16:14.138653    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:14.138653    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:14.138653    8584 round_trippers.go:473]     Content-Type: application/json
	I0229 02:16:14.138653    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:14.143659    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:16:14.143659    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:14.143659    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:14.143659    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:14.143659    8584 round_trippers.go:580]     Content-Length: 1220
	I0229 02:16:14.143659    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:14 GMT
	I0229 02:16:14.143659    8584 round_trippers.go:580]     Audit-Id: 0eeb2b85-2218-4fa6-a0d6-7d8e8b89a118
	I0229 02:16:14.143659    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:14.143659    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:14.143659    8584 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a7ad9511-65e8-4eef-89b4-7c1b803fc689","resourceVersion":"413","creationTimestamp":"2024-02-29T02:16:14Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-29T02:16:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0229 02:16:14.144910    8584 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 02:16:14.144910    8584 addons.go:505] enable addons completed in 9.4278877s: enabled=[storage-provisioner default-storageclass]
	I0229 02:16:14.631487    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:14.631603    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:14.631603    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:14.631603    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:14.635120    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:14.635120    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:14.635120    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:14.635120    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:14 GMT
	I0229 02:16:14.635120    8584 round_trippers.go:580]     Audit-Id: 448a4e05-de72-4089-adc3-a0cf52036b54
	I0229 02:16:14.635120    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:14.635120    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:14.635120    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:14.635840    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:14.636842    8584 node_ready.go:58] node "multinode-314500" has status "Ready":"False"
	I0229 02:16:15.134789    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:15.134789    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:15.134789    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:15.134789    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:15.138353    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:15.138353    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:15.138353    8584 round_trippers.go:580]     Audit-Id: 0c5724bc-14bf-4e22-8b28-2eed750f5e6b
	I0229 02:16:15.138353    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:15.138353    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:15.138353    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:15.138353    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:15.138353    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:15 GMT
	I0229 02:16:15.139035    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:15.636203    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:15.636203    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:15.636203    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:15.636203    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:15.639886    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:15.639886    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:15.639886    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:15 GMT
	I0229 02:16:15.639886    8584 round_trippers.go:580]     Audit-Id: b2f94694-f112-41b9-8bba-5b0a24ebff15
	I0229 02:16:15.639886    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:15.639886    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:15.639886    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:15.639886    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:15.640603    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:16.124483    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:16.124483    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:16.124483    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:16.124483    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:16.128036    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:16.128036    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:16.128036    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:16.128036    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:16.128036    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:16.128036    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:16 GMT
	I0229 02:16:16.128036    8584 round_trippers.go:580]     Audit-Id: 65ab147a-6009-41b7-8632-6cf748b1a929
	I0229 02:16:16.128036    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:16.128774    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:16.630690    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:16.630690    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:16.630690    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:16.630690    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:16.633754    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:16.634195    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:16.634195    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:16 GMT
	I0229 02:16:16.634195    8584 round_trippers.go:580]     Audit-Id: ba8279af-ce65-46db-a113-cfbea5d58aec
	I0229 02:16:16.634195    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:16.634195    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:16.634195    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:16.634247    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:16.634530    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:16.635027    8584 node_ready.go:49] node "multinode-314500" has status "Ready":"True"
	I0229 02:16:16.635027    8584 node_ready.go:38] duration metric: took 11.013794s waiting for node "multinode-314500" to be "Ready" ...
	I0229 02:16:16.635027    8584 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:16:16.635027    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods
	I0229 02:16:16.635027    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:16.635027    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:16.635027    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:16.638680    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:16.638680    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:16.638680    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:16.638680    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:16.638680    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:16.638680    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:16 GMT
	I0229 02:16:16.638680    8584 round_trippers.go:580]     Audit-Id: a971a97c-8e2b-4fb0-abd4-182b3286afda
	I0229 02:16:16.638680    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:16.639968    8584 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"420","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53932 chars]
	I0229 02:16:16.644805    8584 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:16.644983    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:16:16.644983    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:16.645026    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:16.645026    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:16.649483    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:16.649525    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:16.649525    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:16.649525    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:16 GMT
	I0229 02:16:16.649525    8584 round_trippers.go:580]     Audit-Id: 598d01c3-5e69-4f62-935f-f65a0e597752
	I0229 02:16:16.649562    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:16.649618    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:16.649618    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:16.649618    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"420","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0229 02:16:16.650559    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:16.650614    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:16.650614    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:16.650614    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:16.653509    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:16:16.653509    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:16.653509    8584 round_trippers.go:580]     Audit-Id: d4632fc3-b104-4774-9f8d-ad65a9b99634
	I0229 02:16:16.653509    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:16.653509    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:16.653509    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:16.653509    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:16.653509    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:16 GMT
	I0229 02:16:16.653509    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.153751    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:16:17.153915    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.153915    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.153915    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.157465    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:17.157656    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.157656    8584 round_trippers.go:580]     Audit-Id: 58765edd-d51c-4bd1-aba2-02e7a49d9565
	I0229 02:16:17.157656    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.157656    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.157656    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.157656    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.157656    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.157656    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"420","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0229 02:16:17.159074    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.159074    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.159198    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.159261    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.165635    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:16:17.165635    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.165635    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.165635    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.165635    8584 round_trippers.go:580]     Audit-Id: e2d12760-48b9-4e0d-bde2-ffc401c1ae39
	I0229 02:16:17.165635    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.165635    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.165635    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.166245    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.646141    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:16:17.646196    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.646264    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.646264    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.649568    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:17.649568    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.649568    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.649568    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.649568    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.649568    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.649568    8584 round_trippers.go:580]     Audit-Id: 643bfd9c-db53-4709-889d-f2c3b799b531
	I0229 02:16:17.649568    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.649568    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6282 chars]
	I0229 02:16:17.650897    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.650897    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.650950    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.650950    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.653872    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:16:17.653872    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.653872    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.653969    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.654052    8584 round_trippers.go:580]     Audit-Id: bd9d3b81-4e48-4cd4-b61c-872a7afd1012
	I0229 02:16:17.654052    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.654052    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.654083    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.654372    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.654824    8584 pod_ready.go:92] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"True"
	I0229 02:16:17.654879    8584 pod_ready.go:81] duration metric: took 1.0099842s waiting for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
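The `pod_ready.go` cycle above repeats the same pattern for each system-critical pod: GET the Pod, GET its Node, and report Ready once the Pod's `status.conditions` contains a `Ready` condition with status `True`. The sketch below illustrates only that readiness predicate under simplified, hypothetical types (`PodCondition`, `isPodReady` are illustrative names, not minikube's actual code, which uses the client-go `corev1` types).

```go
package main

import "fmt"

// PodCondition loosely mirrors one entry of a Pod's status.conditions
// as returned by GET /api/v1/namespaces/<ns>/pods/<name>.
// Hypothetical type for illustration only.
type PodCondition struct {
	Type   string // e.g. "Initialized", "Ready", "ContainersReady"
	Status string // "True", "False", or "Unknown"
}

// isPodReady reports whether the "Ready" condition is present and "True",
// which is the check the log's pod_ready.go loop polls for.
func isPodReady(conds []PodCondition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	ready := []PodCondition{
		{Type: "Initialized", Status: "True"},
		{Type: "Ready", Status: "True"},
	}
	notReady := []PodCondition{
		{Type: "Ready", Status: "False"},
	}
	fmt.Println(isPodReady(ready))    // true
	fmt.Println(isPodReady(notReady)) // false
}
```

In the log, the first coredns poll at 02:16:16 sees `resourceVersion "420"` (condition not yet True), and the poll at 02:16:17 sees `resourceVersion "435"` with the condition satisfied, ending the wait after about one second.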
	I0229 02:16:17.654879    8584 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.655009    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-314500
	I0229 02:16:17.655009    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.655009    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.655009    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.665273    8584 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0229 02:16:17.665273    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.665273    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.665273    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.665273    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.665273    8584 round_trippers.go:580]     Audit-Id: 526c5f16-2a66-45ce-8632-d0f9fa5f6ba7
	I0229 02:16:17.665273    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.665273    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.667768    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-314500","namespace":"kube-system","uid":"6fc42e7c-48f9-46df-bf2f-861e0684e37f","resourceVersion":"323","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.2.165:2379","kubernetes.io/config.hash":"0b84e88097a2b59a9c108b0f9fa2b889","kubernetes.io/config.mirror":"0b84e88097a2b59a9c108b0f9fa2b889","kubernetes.io/config.seen":"2024-02-29T02:15:52.221392786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5852 chars]
	I0229 02:16:17.668271    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.668271    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.668271    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.668271    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.677864    8584 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0229 02:16:17.677864    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.677864    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.677864    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.677864    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.677864    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.677864    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.677864    8584 round_trippers.go:580]     Audit-Id: 4d992db8-60ef-49b3-b2e9-0703ba54de12
	I0229 02:16:17.678938    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.678938    8584 pod_ready.go:92] pod "etcd-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:16:17.678938    8584 pod_ready.go:81] duration metric: took 24.0576ms waiting for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.678938    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.679572    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-314500
	I0229 02:16:17.679572    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.679622    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.679622    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.683833    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:17.683833    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.683833    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.684456    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.684456    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.684456    8584 round_trippers.go:580]     Audit-Id: 6f1b85da-922b-459d-a8dc-fb211d6b23dc
	I0229 02:16:17.684456    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.684456    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.684668    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-314500","namespace":"kube-system","uid":"fc266082-ff2c-4bd1-951f-11dc765a28ae","resourceVersion":"303","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.2.165:8443","kubernetes.io/config.hash":"75abc10fab898952206cc1d682d3c922","kubernetes.io/config.mirror":"75abc10fab898952206cc1d682d3c922","kubernetes.io/config.seen":"2024-02-29T02:15:52.221397486Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7390 chars]
	I0229 02:16:17.685312    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.685312    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.685365    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.685365    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.690438    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:16:17.690438    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.690438    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.690438    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.690438    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.690438    8584 round_trippers.go:580]     Audit-Id: f815fd6b-646c-44c0-9468-208bff1f7a45
	I0229 02:16:17.690438    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.690438    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.690823    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.691302    8584 pod_ready.go:92] pod "kube-apiserver-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:16:17.691364    8584 pod_ready.go:81] duration metric: took 12.4254ms waiting for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.691364    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.691491    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-314500
	I0229 02:16:17.691491    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.691491    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.691491    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.693699    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:16:17.694098    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.694098    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.694098    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.694098    8584 round_trippers.go:580]     Audit-Id: bb9e2109-665e-49c3-ac65-cbc158c70f3e
	I0229 02:16:17.694098    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.694195    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.694195    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.694402    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-314500","namespace":"kube-system","uid":"58e57902-e113-44a9-b5b5-4aba2ba13491","resourceVersion":"302","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.mirror":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.seen":"2024-02-29T02:15:52.221398986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6965 chars]
	I0229 02:16:17.695017    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.695067    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.695067    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.695067    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.698234    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:17.698281    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.698281    8584 round_trippers.go:580]     Audit-Id: 148ca40f-d5fb-49be-8b8a-09cc4e3afa18
	I0229 02:16:17.698281    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.698281    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.698339    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.698339    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.698388    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.699249    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.699313    8584 pod_ready.go:92] pod "kube-controller-manager-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:16:17.699313    8584 pod_ready.go:81] duration metric: took 7.8948ms waiting for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.699313    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.699313    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:16:17.699313    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.699313    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.699313    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.702891    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:17.703633    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.703633    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.703633    8584 round_trippers.go:580]     Audit-Id: 7025cb07-a461-4530-bdd7-f2453b2a2350
	I0229 02:16:17.703633    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.703633    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.703633    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.703633    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.703905    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6r6j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b84b22d-3786-4f9e-a23a-c7cfc93bb671","resourceVersion":"394","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0229 02:16:17.832880    8584 request.go:629] Waited for 126.8086ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.832880    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.832880    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.832880    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.832880    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.836917    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:17.836917    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.836917    8584 round_trippers.go:580]     Audit-Id: 52268278-8be7-4449-a4bc-d534692682ee
	I0229 02:16:17.836917    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.836917    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.836917    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.836917    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.836917    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:18 GMT
	I0229 02:16:17.837455    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.837896    8584 pod_ready.go:92] pod "kube-proxy-6r6j4" in "kube-system" namespace has status "Ready":"True"
	I0229 02:16:17.837896    8584 pod_ready.go:81] duration metric: took 138.5747ms waiting for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.837896    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:18.036948    8584 request.go:629] Waited for 198.7966ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:16:18.037077    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:16:18.037077    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:18.037077    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:18.037077    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:18.040666    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:18.040666    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:18.040666    8584 round_trippers.go:580]     Audit-Id: 97ba3e81-c240-4d8f-a9e6-117a64b5672c
	I0229 02:16:18.041515    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:18.041515    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:18.041515    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:18.041515    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:18.041515    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:18 GMT
	I0229 02:16:18.041693    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-314500","namespace":"kube-system","uid":"31fcecc6-17de-43a6-892d-37cd915de64b","resourceVersion":"288","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.mirror":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.seen":"2024-02-29T02:15:52.221399886Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4695 chars]
	I0229 02:16:18.240929    8584 request.go:629] Waited for 198.242ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:18.241375    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:18.241435    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:18.241435    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:18.241435    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:18.244752    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:18.245526    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:18.245526    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:18.245611    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:18 GMT
	I0229 02:16:18.245611    8584 round_trippers.go:580]     Audit-Id: 7935c4ce-ff7f-4b35-bff9-a77da52c6dda
	I0229 02:16:18.245611    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:18.245611    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:18.245611    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:18.245611    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:18.246214    8584 pod_ready.go:92] pod "kube-scheduler-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:16:18.246214    8584 pod_ready.go:81] duration metric: took 408.2266ms waiting for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:18.246214    8584 pod_ready.go:38] duration metric: took 1.6110974s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:16:18.246214    8584 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:16:18.257038    8584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:18.283407    8584 command_runner.go:130] > 2018
	I0229 02:16:18.283407    8584 api_server.go:72] duration metric: took 12.9918453s to wait for apiserver process to appear ...
	I0229 02:16:18.283407    8584 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:16:18.283407    8584 api_server.go:253] Checking apiserver healthz at https://172.19.2.165:8443/healthz ...
	I0229 02:16:18.292685    8584 api_server.go:279] https://172.19.2.165:8443/healthz returned 200:
	ok
	I0229 02:16:18.293146    8584 round_trippers.go:463] GET https://172.19.2.165:8443/version
	I0229 02:16:18.293146    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:18.293146    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:18.293146    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:18.296745    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:18.296766    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:18.296766    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:18 GMT
	I0229 02:16:18.296844    8584 round_trippers.go:580]     Audit-Id: a3568257-7ba8-46aa-906e-199f937d3cb2
	I0229 02:16:18.296844    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:18.296844    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:18.296844    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:18.296844    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:18.296844    8584 round_trippers.go:580]     Content-Length: 264
	I0229 02:16:18.296933    8584 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0229 02:16:18.297126    8584 api_server.go:141] control plane version: v1.28.4
	I0229 02:16:18.297126    8584 api_server.go:131] duration metric: took 13.7187ms to wait for apiserver health ...
	I0229 02:16:18.297126    8584 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:16:18.441150    8584 request.go:629] Waited for 143.8801ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods
	I0229 02:16:18.441150    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods
	I0229 02:16:18.441150    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:18.441150    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:18.441150    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:18.446130    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:18.446130    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:18.446130    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:18 GMT
	I0229 02:16:18.446130    8584 round_trippers.go:580]     Audit-Id: ad8e47f3-2e6e-4c08-9bc9-672b7124a085
	I0229 02:16:18.446130    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:18.446130    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:18.446130    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:18.446130    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:18.447912    8584 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"439"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54048 chars]
	I0229 02:16:18.450435    8584 system_pods.go:59] 8 kube-system pods found
	I0229 02:16:18.450435    8584 system_pods.go:61] "coredns-5dd5756b68-8g6tg" [ef7fb259-9f24-4645-9eff-2b16f6789e1b] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "etcd-multinode-314500" [6fc42e7c-48f9-46df-bf2f-861e0684e37f] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "kindnet-t9r77" [4620d417-744c-4049-82ab-79d1ee7f047c] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "kube-apiserver-multinode-314500" [fc266082-ff2c-4bd1-951f-11dc765a28ae] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "kube-controller-manager-multinode-314500" [58e57902-e113-44a9-b5b5-4aba2ba13491] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "kube-proxy-6r6j4" [2b84b22d-3786-4f9e-a23a-c7cfc93bb671] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "kube-scheduler-multinode-314500" [31fcecc6-17de-43a6-892d-37cd915de64b] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "storage-provisioner" [9780520b-8ff9-408a-ab6f-41b63790ccd1] Running
	I0229 02:16:18.450435    8584 system_pods.go:74] duration metric: took 153.3001ms to wait for pod list to return data ...
	I0229 02:16:18.450435    8584 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:16:18.641470    8584 request.go:629] Waited for 191.0243ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/default/serviceaccounts
	I0229 02:16:18.641470    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/default/serviceaccounts
	I0229 02:16:18.641470    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:18.641470    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:18.641470    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:18.645874    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:18.645874    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:18.645874    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:18 GMT
	I0229 02:16:18.645874    8584 round_trippers.go:580]     Audit-Id: 9e04f5c6-c753-4db9-b22e-07bcf383223a
	I0229 02:16:18.646835    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:18.646835    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:18.646835    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:18.646835    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:18.646835    8584 round_trippers.go:580]     Content-Length: 261
	I0229 02:16:18.646835    8584 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"439"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a442432a-e4e1-4889-bfa8-e3967acc17f0","resourceVersion":"330","creationTimestamp":"2024-02-29T02:16:04Z"}}]}
	I0229 02:16:18.646835    8584 default_sa.go:45] found service account: "default"
	I0229 02:16:18.646835    8584 default_sa.go:55] duration metric: took 196.3895ms for default service account to be created ...
	I0229 02:16:18.646835    8584 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:16:18.844094    8584 request.go:629] Waited for 197.2476ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods
	I0229 02:16:18.844094    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods
	I0229 02:16:18.844094    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:18.844094    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:18.844094    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:18.848446    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:18.848446    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:18.848446    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:18.848446    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:18.848446    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:19 GMT
	I0229 02:16:18.848446    8584 round_trippers.go:580]     Audit-Id: 5e1f0b6a-e7f1-4363-96af-41558a1cff57
	I0229 02:16:18.848446    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:18.848446    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:18.850291    8584 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"439"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54048 chars]
	I0229 02:16:18.852542    8584 system_pods.go:86] 8 kube-system pods found
	I0229 02:16:18.852542    8584 system_pods.go:89] "coredns-5dd5756b68-8g6tg" [ef7fb259-9f24-4645-9eff-2b16f6789e1b] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "etcd-multinode-314500" [6fc42e7c-48f9-46df-bf2f-861e0684e37f] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "kindnet-t9r77" [4620d417-744c-4049-82ab-79d1ee7f047c] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "kube-apiserver-multinode-314500" [fc266082-ff2c-4bd1-951f-11dc765a28ae] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "kube-controller-manager-multinode-314500" [58e57902-e113-44a9-b5b5-4aba2ba13491] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "kube-proxy-6r6j4" [2b84b22d-3786-4f9e-a23a-c7cfc93bb671] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "kube-scheduler-multinode-314500" [31fcecc6-17de-43a6-892d-37cd915de64b] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "storage-provisioner" [9780520b-8ff9-408a-ab6f-41b63790ccd1] Running
	I0229 02:16:18.852542    8584 system_pods.go:126] duration metric: took 205.6953ms to wait for k8s-apps to be running ...
	I0229 02:16:18.852542    8584 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:16:18.861417    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:16:18.887054    8584 system_svc.go:56] duration metric: took 34.4312ms WaitForService to wait for kubelet.
	I0229 02:16:18.887149    8584 kubeadm.go:581] duration metric: took 13.5955543s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:16:18.887215    8584 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:16:19.031410    8584 request.go:629] Waited for 144.1874ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes
	I0229 02:16:19.031606    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes
	I0229 02:16:19.031606    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:19.031606    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:19.031606    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:19.035104    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:19.035104    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:19.035104    8584 round_trippers.go:580]     Audit-Id: 3e53124b-3fb7-4d71-a89e-22e59922a676
	I0229 02:16:19.035104    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:19.035104    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:19.035507    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:19.035507    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:19.035507    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:19 GMT
	I0229 02:16:19.035795    8584 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"439"},"items":[{"metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4834 chars]
	I0229 02:16:19.036569    8584 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:16:19.036646    8584 node_conditions.go:123] node cpu capacity is 2
	I0229 02:16:19.036646    8584 node_conditions.go:105] duration metric: took 149.4233ms to run NodePressure ...
	I0229 02:16:19.036755    8584 start.go:228] waiting for startup goroutines ...
	I0229 02:16:19.036755    8584 start.go:233] waiting for cluster config update ...
	I0229 02:16:19.036755    8584 start.go:242] writing updated cluster config ...
	I0229 02:16:19.038683    8584 out.go:177] 
	I0229 02:16:19.055810    8584 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:16:19.055971    8584 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:16:19.059124    8584 out.go:177] * Starting worker node multinode-314500-m02 in cluster multinode-314500
	I0229 02:16:19.059762    8584 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:16:19.059762    8584 cache.go:56] Caching tarball of preloaded images
	I0229 02:16:19.060125    8584 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:16:19.060125    8584 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 02:16:19.060125    8584 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:16:19.069726    8584 start.go:365] acquiring machines lock for multinode-314500-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:16:19.070853    8584 start.go:369] acquired machines lock for "multinode-314500-m02" in 145.1µs
	I0229 02:16:19.071032    8584 start.go:93] Provisioning new machine with config: &{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.165 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequ
ested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 02:16:19.071032    8584 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0229 02:16:19.071291    8584 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 02:16:19.071291    8584 start.go:159] libmachine.API.Create for "multinode-314500" (driver="hyperv")
	I0229 02:16:19.071291    8584 client.go:168] LocalClient.Create starting
	I0229 02:16:19.072518    8584 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0229 02:16:19.072841    8584 main.go:141] libmachine: Decoding PEM data...
	I0229 02:16:19.072841    8584 main.go:141] libmachine: Parsing certificate...
	I0229 02:16:19.073047    8584 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0229 02:16:19.073192    8584 main.go:141] libmachine: Decoding PEM data...
	I0229 02:16:19.073192    8584 main.go:141] libmachine: Parsing certificate...
	I0229 02:16:19.073192    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0229 02:16:20.920705    8584 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0229 02:16:20.920705    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:20.921317    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0229 02:16:22.576054    8584 main.go:141] libmachine: [stdout =====>] : False
	
	I0229 02:16:22.576118    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:22.576186    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 02:16:24.018073    8584 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 02:16:24.018073    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:24.018073    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 02:16:27.519984    8584 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 02:16:27.521004    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:27.522825    8584 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 02:16:27.901527    8584 main.go:141] libmachine: Creating SSH key...
	I0229 02:16:28.097501    8584 main.go:141] libmachine: Creating VM...
	I0229 02:16:28.097501    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 02:16:30.904965    8584 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 02:16:30.905182    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:30.905182    8584 main.go:141] libmachine: Using switch "Default Switch"
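	The two `Get-VMSwitch` queries above filter for an External switch or the well-known Default Switch ID, then fall back to "Default Switch" since only an Internal switch (SwitchType 1) exists on this host. A minimal Go sketch of that selection logic, with assumed struct/function names (`vmSwitch`, `pickSwitch` — not minikube's actual types), parsing the JSON exactly as captured in this log:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch mirrors the fields selected by the logged PowerShell pipeline
// (Select Id, Name, SwitchType). Field names here are assumptions based on
// that output, not minikube's real types.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int // Hyper-V: 0 = Private, 1 = Internal, 2 = External
}

// pickSwitch prefers an External switch and otherwise falls back to the
// Default Switch, matching the Where-Object filter in the logged query.
func pickSwitch(raw []byte) (string, error) {
	var switches []vmSwitch
	if err := json.Unmarshal(raw, &switches); err != nil {
		return "", err
	}
	for _, s := range switches {
		if s.SwitchType == 2 { // External wins if present
			return s.Name, nil
		}
	}
	for _, s := range switches {
		// Well-known GUID of the Hyper-V Default Switch, as seen in the log.
		if s.Id == "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444" {
			return s.Name, nil
		}
	}
	return "", fmt.Errorf("no usable switch found")
}

func main() {
	// The stdout captured above: one Internal "Default Switch".
	stdout := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
	name, err := pickSwitch(stdout)
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // prints "Default Switch"
}
```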
	I0229 02:16:30.905182    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 02:16:32.604830    8584 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 02:16:32.604830    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:32.604830    8584 main.go:141] libmachine: Creating VHD
	I0229 02:16:32.604937    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0229 02:16:36.234786    8584 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 68DA3A88-B6E1-46DA-93D1-804B8B5EA2B6
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0229 02:16:36.234786    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:36.234786    8584 main.go:141] libmachine: Writing magic tar header
	I0229 02:16:36.235274    8584 main.go:141] libmachine: Writing SSH key tar header
	I0229 02:16:36.244776    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0229 02:16:39.318116    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:39.318116    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:39.318116    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\disk.vhd' -SizeBytes 20000MB
	I0229 02:16:41.733381    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:41.733986    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:41.734091    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-314500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0229 02:16:45.142995    8584 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-314500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 02:16:45.142995    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:45.143938    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-314500-m02 -DynamicMemoryEnabled $false
	I0229 02:16:47.265484    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:47.265484    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:47.265616    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-314500-m02 -Count 2
	I0229 02:16:49.321416    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:49.321772    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:49.321890    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-314500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\boot2docker.iso'
	I0229 02:16:51.771609    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:51.771808    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:51.771808    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-314500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\disk.vhd'
	I0229 02:16:54.237843    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:54.238288    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:54.238288    8584 main.go:141] libmachine: Starting VM...
	I0229 02:16:54.238364    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-314500-m02
	I0229 02:16:56.948503    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:56.948564    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:56.948564    8584 main.go:141] libmachine: Waiting for host to start...
	I0229 02:16:56.948691    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:16:59.081484    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:16:59.081484    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:59.081484    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:01.451137    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:17:01.451137    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:02.451735    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:04.482600    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:04.482600    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:04.482600    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:06.855829    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:17:06.855829    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:07.863335    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:09.971502    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:09.971502    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:09.971663    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:12.324229    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:17:12.324333    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:13.330922    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:15.391366    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:15.391404    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:15.391404    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:17.718844    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:17:17.718973    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:18.726464    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:20.785794    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:20.785794    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:20.785794    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:23.184930    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:23.184930    8584 main.go:141] libmachine: [stderr =====>] : 
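	The sequence above is a poll loop: the adapter's `ipaddresses[0]` query returns empty while DHCP is still in flight, so the driver re-checks VM state and retries (with roughly a one-second pause between rounds, per the timestamps) until `172.19.5.202` appears. A simplified sketch of that retry pattern — `waitForIP` is a hypothetical name, not minikube's implementation:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// waitForIP polls getIP (standing in for the PowerShell query
// ((Get-VM <name>).networkadapters[0]).ipaddresses[0]) until a non-empty
// address comes back or the attempt budget is exhausted.
func waitForIP(getIP func() string, attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip := strings.TrimSpace(getIP()); ip != "" {
			return ip, nil
		}
		time.Sleep(delay) // DHCP not done yet; back off and retry
	}
	return "", fmt.Errorf("machine did not report an IP after %d attempts", attempts)
}

func main() {
	// Simulate the log: several empty replies, then the leased address.
	replies := []string{"", "", "", "", "172.19.5.202"}
	i := 0
	getIP := func() string {
		r := replies[i%len(replies)]
		i++
		return r
	}
	ip, err := waitForIP(getIP, 10, time.Millisecond)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // prints 172.19.5.202
}
```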
	I0229 02:17:23.185003    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:25.185603    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:25.185847    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:25.185847    8584 machine.go:88] provisioning docker machine ...
	I0229 02:17:25.185847    8584 buildroot.go:166] provisioning hostname "multinode-314500-m02"
	I0229 02:17:25.185847    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:27.225297    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:27.226441    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:27.226473    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:29.607904    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:29.607904    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:29.612460    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:17:29.622734    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:17:29.622734    8584 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-314500-m02 && echo "multinode-314500-m02" | sudo tee /etc/hostname
	I0229 02:17:29.783303    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-314500-m02
	
	I0229 02:17:29.783303    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:31.813172    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:31.813290    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:31.813290    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:34.232804    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:34.233345    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:34.237405    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:17:34.237468    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:17:34.237468    8584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-314500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-314500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-314500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:17:34.392771    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:17:34.392771    8584 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 02:17:34.392853    8584 buildroot.go:174] setting up certificates
	I0229 02:17:34.392853    8584 provision.go:83] configureAuth start
	I0229 02:17:34.392853    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:36.409714    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:36.409714    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:36.409926    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:38.862723    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:38.862870    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:38.862870    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:40.858876    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:40.859201    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:40.859201    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:43.234342    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:43.234419    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:43.234419    8584 provision.go:138] copyHostCerts
	I0229 02:17:43.234567    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 02:17:43.234765    8584 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 02:17:43.234765    8584 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 02:17:43.235285    8584 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 02:17:43.236034    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 02:17:43.236034    8584 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 02:17:43.236034    8584 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 02:17:43.236034    8584 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0229 02:17:43.236807    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 02:17:43.237396    8584 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 02:17:43.237396    8584 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 02:17:43.237497    8584 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 02:17:43.238127    8584 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-314500-m02 san=[172.19.5.202 172.19.5.202 localhost 127.0.0.1 minikube multinode-314500-m02]
	I0229 02:17:43.524218    8584 provision.go:172] copyRemoteCerts
	I0229 02:17:43.533207    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:17:43.533207    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:45.530673    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:45.530673    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:45.530747    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:47.941248    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:47.941248    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:47.942211    8584 sshutil.go:53] new ssh client: &{IP:172.19.5.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\id_rsa Username:docker}
	I0229 02:17:48.060802    8584 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5273422s)
	I0229 02:17:48.060802    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 02:17:48.061398    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 02:17:48.106726    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 02:17:48.107259    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0229 02:17:48.151608    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 02:17:48.152143    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:17:48.200186    8584 provision.go:86] duration metric: configureAuth took 13.8065619s
	I0229 02:17:48.200186    8584 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:17:48.200842    8584 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:17:48.200920    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:50.211498    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:50.211498    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:50.211498    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:52.592758    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:52.592758    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:52.597792    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:17:52.598309    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:17:52.598381    8584 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 02:17:52.757991    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 02:17:52.757991    8584 buildroot.go:70] root file system type: tmpfs
	I0229 02:17:52.757991    8584 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 02:17:52.758523    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:54.794561    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:54.794987    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:54.795068    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:57.208524    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:57.208524    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:57.212707    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:17:57.213061    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:17:57.213061    8584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.2.165"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 02:17:57.378362    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.2.165
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 02:17:57.378395    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:59.428307    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:59.428307    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:59.428307    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:01.824823    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:01.824823    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:01.828335    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:18:01.828927    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:18:01.828927    8584 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 02:18:02.863847    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 02:18:02.863847    8584 machine.go:91] provisioned docker machine in 37.6758983s
	I0229 02:18:02.863847    8584 client.go:171] LocalClient.Create took 1m43.7867595s
	I0229 02:18:02.864958    8584 start.go:167] duration metric: libmachine.API.Create for "multinode-314500" took 1m43.78787s
	I0229 02:18:02.864958    8584 start.go:300] post-start starting for "multinode-314500-m02" (driver="hyperv")
	I0229 02:18:02.864958    8584 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:18:02.874256    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:18:02.874256    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:04.910564    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:04.910633    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:04.910703    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:07.378336    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:07.378336    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:07.378487    8584 sshutil.go:53] new ssh client: &{IP:172.19.5.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\id_rsa Username:docker}
	I0229 02:18:07.486010    8584 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6114972s)
	I0229 02:18:07.496984    8584 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:18:07.504935    8584 command_runner.go:130] > NAME=Buildroot
	I0229 02:18:07.504935    8584 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 02:18:07.504935    8584 command_runner.go:130] > ID=buildroot
	I0229 02:18:07.504935    8584 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 02:18:07.504935    8584 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 02:18:07.505148    8584 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:18:07.505148    8584 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 02:18:07.505545    8584 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 02:18:07.508348    8584 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> 33122.pem in /etc/ssl/certs
	I0229 02:18:07.508348    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /etc/ssl/certs/33122.pem
	I0229 02:18:07.517641    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:18:07.536722    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /etc/ssl/certs/33122.pem (1708 bytes)
	I0229 02:18:07.582613    8584 start.go:303] post-start completed in 4.7173917s
	I0229 02:18:07.584757    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:09.616749    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:09.616749    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:09.617616    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:12.029126    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:12.029126    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:12.029537    8584 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:18:12.031412    8584 start.go:128] duration metric: createHost completed in 1m52.9539719s
	I0229 02:18:12.031412    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:14.046188    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:14.046538    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:14.046589    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:16.455401    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:16.455976    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:16.461299    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:18:16.461877    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:18:16.461877    8584 main.go:141] libmachine: About to run SSH command:
date +%s.%N
	I0229 02:18:16.593240    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173096.763630370
	
	I0229 02:18:16.593344    8584 fix.go:206] guest clock: 1709173096.763630370
	I0229 02:18:16.593344    8584 fix.go:219] Guest: 2024-02-29 02:18:16.76363037 +0000 UTC Remote: 2024-02-29 02:18:12.0314125 +0000 UTC m=+312.004845001 (delta=4.73221787s)
	I0229 02:18:16.593455    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:18.589352    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:18.589352    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:18.589352    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:21.027873    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:21.027947    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:21.033045    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:18:21.033045    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:18:21.033569    8584 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709173096
	I0229 02:18:21.167765    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 02:18:16 UTC 2024
	
	I0229 02:18:21.167765    8584 fix.go:226] clock set: Thu Feb 29 02:18:16 UTC 2024
	 (err=<nil>)
	I0229 02:18:21.167765    8584 start.go:83] releasing machines lock for "multinode-314500-m02", held for 2m2.0900438s
	I0229 02:18:21.167765    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:23.153744    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:23.153744    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:23.153744    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:25.578574    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:25.578574    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:25.578800    8584 out.go:177] * Found network options:
	I0229 02:18:25.580065    8584 out.go:177]   - NO_PROXY=172.19.2.165
	W0229 02:18:25.580612    8584 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 02:18:25.580835    8584 out.go:177]   - NO_PROXY=172.19.2.165
	W0229 02:18:25.581420    8584 proxy.go:119] fail to check proxy env: Error ip not in block
	W0229 02:18:25.583050    8584 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 02:18:25.585206    8584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:18:25.585373    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:25.593744    8584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 02:18:25.594079    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:27.674183    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:27.674183    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:27.674183    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:27.675185    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:27.675185    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:27.675185    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:30.173701    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:30.174284    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:30.174503    8584 sshutil.go:53] new ssh client: &{IP:172.19.5.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\id_rsa Username:docker}
	I0229 02:18:30.199190    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:30.199190    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:30.199500    8584 sshutil.go:53] new ssh client: &{IP:172.19.5.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\id_rsa Username:docker}
	I0229 02:18:30.277565    8584 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0229 02:18:30.278069    8584 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.6840656s)
	W0229 02:18:30.278069    8584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0229 02:18:30.290955    8584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:18:30.389381    8584 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 02:18:30.389381    8584 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0229 02:18:30.389381    8584 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8038229s)
	I0229 02:18:30.389381    8584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:18:30.389381    8584 start.go:475] detecting cgroup driver to use...
I0229 02:18:30.389381    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:18:30.425450    8584 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0229 02:18:30.436466    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 02:18:30.467218    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:18:30.486122    8584 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:18:30.494627    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:18:30.522647    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:18:30.553444    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:18:30.581124    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:18:30.616953    8584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:18:30.644924    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 02:18:30.674292    8584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:18:30.691155    8584 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 02:18:30.703168    8584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:18:30.731843    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:18:30.943189    8584 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 02:18:30.974201    8584 start.go:475] detecting cgroup driver to use...
	I0229 02:18:30.984195    8584 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 02:18:31.010398    8584 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0229 02:18:31.010398    8584 command_runner.go:130] > [Unit]
	I0229 02:18:31.010398    8584 command_runner.go:130] > Description=Docker Application Container Engine
	I0229 02:18:31.010398    8584 command_runner.go:130] > Documentation=https://docs.docker.com
	I0229 02:18:31.010398    8584 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0229 02:18:31.010398    8584 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0229 02:18:31.010398    8584 command_runner.go:130] > StartLimitBurst=3
	I0229 02:18:31.010398    8584 command_runner.go:130] > StartLimitIntervalSec=60
	I0229 02:18:31.010398    8584 command_runner.go:130] > [Service]
	I0229 02:18:31.010398    8584 command_runner.go:130] > Type=notify
	I0229 02:18:31.010398    8584 command_runner.go:130] > Restart=on-failure
	I0229 02:18:31.010398    8584 command_runner.go:130] > Environment=NO_PROXY=172.19.2.165
	I0229 02:18:31.010398    8584 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0229 02:18:31.010398    8584 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0229 02:18:31.010398    8584 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0229 02:18:31.010931    8584 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0229 02:18:31.010981    8584 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0229 02:18:31.011019    8584 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0229 02:18:31.011019    8584 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0229 02:18:31.011082    8584 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0229 02:18:31.011138    8584 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0229 02:18:31.011138    8584 command_runner.go:130] > ExecStart=
	I0229 02:18:31.011197    8584 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0229 02:18:31.011243    8584 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0229 02:18:31.011243    8584 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0229 02:18:31.011315    8584 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0229 02:18:31.011359    8584 command_runner.go:130] > LimitNOFILE=infinity
	I0229 02:18:31.011359    8584 command_runner.go:130] > LimitNPROC=infinity
	I0229 02:18:31.011359    8584 command_runner.go:130] > LimitCORE=infinity
	I0229 02:18:31.011425    8584 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0229 02:18:31.011425    8584 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0229 02:18:31.011425    8584 command_runner.go:130] > TasksMax=infinity
	I0229 02:18:31.011495    8584 command_runner.go:130] > TimeoutStartSec=0
	I0229 02:18:31.011495    8584 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0229 02:18:31.011495    8584 command_runner.go:130] > Delegate=yes
	I0229 02:18:31.011557    8584 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0229 02:18:31.011557    8584 command_runner.go:130] > KillMode=process
	I0229 02:18:31.011557    8584 command_runner.go:130] > [Install]
	I0229 02:18:31.011626    8584 command_runner.go:130] > WantedBy=multi-user.target
	I0229 02:18:31.022514    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:18:31.053734    8584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:18:31.093320    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:18:31.125810    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:18:31.159106    8584 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:18:31.209007    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0229 02:18:31.236274    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:18:31.271193    8584 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0229 02:18:31.283174    8584 ssh_runner.go:195] Run: which cri-dockerd
	I0229 02:18:31.290285    8584 command_runner.go:130] > /usr/bin/cri-dockerd
	I0229 02:18:31.300670    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 02:18:31.320930    8584 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 02:18:31.363898    8584 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 02:18:31.567044    8584 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 02:18:31.755853    8584 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 02:18:31.755981    8584 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 02:18:31.800154    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:18:32.002260    8584 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 02:18:33.510987    8584 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5086429s)
	I0229 02:18:33.521617    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 02:18:33.555076    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:18:33.593354    8584 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 02:18:33.787890    8584 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 02:18:34.002397    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:18:34.193768    8584 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 02:18:34.233767    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:18:34.268183    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:18:34.461138    8584 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 02:18:34.565934    8584 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 02:18:34.575816    8584 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 02:18:34.586219    8584 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0229 02:18:34.586284    8584 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 02:18:34.586284    8584 command_runner.go:130] > Device: 0,22	Inode: 891         Links: 1
	I0229 02:18:34.586284    8584 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0229 02:18:34.586284    8584 command_runner.go:130] > Access: 2024-02-29 02:18:34.658282101 +0000
	I0229 02:18:34.586284    8584 command_runner.go:130] > Modify: 2024-02-29 02:18:34.658282101 +0000
	I0229 02:18:34.586284    8584 command_runner.go:130] > Change: 2024-02-29 02:18:34.662282244 +0000
	I0229 02:18:34.586356    8584 command_runner.go:130] >  Birth: -
	I0229 02:18:34.586415    8584 start.go:543] Will wait 60s for crictl version
	I0229 02:18:34.594891    8584 ssh_runner.go:195] Run: which crictl
	I0229 02:18:34.600806    8584 command_runner.go:130] > /usr/bin/crictl
	I0229 02:18:34.613152    8584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:18:34.683047    8584 command_runner.go:130] > Version:  0.1.0
	I0229 02:18:34.683047    8584 command_runner.go:130] > RuntimeName:  docker
	I0229 02:18:34.683047    8584 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0229 02:18:34.683047    8584 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 02:18:34.683047    8584 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 02:18:34.690707    8584 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:18:34.727739    8584 command_runner.go:130] > 24.0.7
	I0229 02:18:34.736706    8584 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:18:34.772261    8584 command_runner.go:130] > 24.0.7
	I0229 02:18:34.773681    8584 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 02:18:34.774281    8584 out.go:177]   - env NO_PROXY=172.19.2.165
	I0229 02:18:34.775285    8584 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 02:18:34.778553    8584 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 02:18:34.779106    8584 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 02:18:34.779106    8584 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 02:18:34.779106    8584 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:a3:c1 Flags:up|broadcast|multicast|running}
	I0229 02:18:34.782065    8584 ip.go:210] interface addr: fe80::fc78:4865:5cac:d448/64
	I0229 02:18:34.782065    8584 ip.go:210] interface addr: 172.19.0.1/20
	I0229 02:18:34.790491    8584 ssh_runner.go:195] Run: grep 172.19.0.1	host.minikube.internal$ /etc/hosts
	I0229 02:18:34.796849    8584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:18:34.818492    8584 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500 for IP: 172.19.5.202
	I0229 02:18:34.818492    8584 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:18:34.818492    8584 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 02:18:34.818492    8584 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 02:18:34.819491    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 02:18:34.819491    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 02:18:34.819491    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 02:18:34.819491    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 02:18:34.819491    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem (1338 bytes)
	W0229 02:18:34.820491    8584 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312_empty.pem, impossibly tiny 0 bytes
	I0229 02:18:34.820491    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 02:18:34.820491    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 02:18:34.820491    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 02:18:34.820491    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0229 02:18:34.821487    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem (1708 bytes)
	I0229 02:18:34.821487    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:18:34.821487    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem -> /usr/share/ca-certificates/3312.pem
	I0229 02:18:34.821487    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /usr/share/ca-certificates/33122.pem
	I0229 02:18:34.822487    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:18:34.868245    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:18:34.918714    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:18:34.967307    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:18:35.017796    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:18:35.066669    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem --> /usr/share/ca-certificates/3312.pem (1338 bytes)
	I0229 02:18:35.114276    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /usr/share/ca-certificates/33122.pem (1708 bytes)
	I0229 02:18:35.168006    8584 ssh_runner.go:195] Run: openssl version
	I0229 02:18:35.176800    8584 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 02:18:35.185691    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:18:35.215735    8584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:18:35.222256    8584 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:18:35.222256    8584 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:18:35.230885    8584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:18:35.240332    8584 command_runner.go:130] > b5213941
	I0229 02:18:35.249159    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:18:35.281031    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3312.pem && ln -fs /usr/share/ca-certificates/3312.pem /etc/ssl/certs/3312.pem"
	I0229 02:18:35.309172    8584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3312.pem
	I0229 02:18:35.315998    8584 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:18:35.315998    8584 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:18:35.326720    8584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3312.pem
	I0229 02:18:35.335106    8584 command_runner.go:130] > 51391683
	I0229 02:18:35.344025    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3312.pem /etc/ssl/certs/51391683.0"
	I0229 02:18:35.372591    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/33122.pem && ln -fs /usr/share/ca-certificates/33122.pem /etc/ssl/certs/33122.pem"
	I0229 02:18:35.406771    8584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/33122.pem
	I0229 02:18:35.415262    8584 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:18:35.415680    8584 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:18:35.425523    8584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/33122.pem
	I0229 02:18:35.433811    8584 command_runner.go:130] > 3ec20f2e
	I0229 02:18:35.445146    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/33122.pem /etc/ssl/certs/3ec20f2e.0"
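The cert steps above follow the OpenSSL trust-store convention: each CA file is hashed with `openssl x509 -hash` and linked into `/etc/ssl/certs` as `<hash>.0`. A minimal sketch of the symlink half of that (the `link_cert` helper is hypothetical; the hash value is taken verbatim from the log, since computing a real subject hash needs openssl):

```python
import os
import tempfile

def link_cert(cert_path: str, subject_hash: str, certs_dir: str) -> str:
    """Mimic `ln -fs <cert> /etc/ssl/certs/<hash>.0` from the log above.
    The subject hash would normally come from `openssl x509 -hash -noout`."""
    link = os.path.join(certs_dir, subject_hash + ".0")
    if os.path.islink(link) or os.path.exists(link):
        os.remove(link)              # -f: replace any existing link
    os.symlink(cert_path, link)      # -s: symbolic link
    return link

with tempfile.TemporaryDirectory() as d:
    cert = os.path.join(d, "minikubeCA.pem")
    with open(cert, "w") as f:
        f.write("dummy cert")
    link = link_cert(cert, "b5213941", d)   # hash value copied from the log
    print(os.path.basename(link))           # b5213941.0
```

The `.0` suffix is the collision counter OpenSSL uses when two certificates share a subject hash.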
	I0229 02:18:35.475114    8584 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:18:35.481743    8584 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:18:35.482501    8584 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:18:35.489621    8584 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 02:18:35.524210    8584 command_runner.go:130] > cgroupfs
	I0229 02:18:35.524318    8584 cni.go:84] Creating CNI manager for ""
	I0229 02:18:35.524318    8584 cni.go:136] 2 nodes found, recommending kindnet
	I0229 02:18:35.524318    8584 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:18:35.524429    8584 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.5.202 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-314500 NodeName:multinode-314500-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.2.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.5.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:18:35.524626    8584 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.5.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-314500-m02"
	  kubeletExtraArgs:
	    node-ip: 172.19.5.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.2.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
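The kubeadm config above is four YAML documents joined by `---` separators (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch of enumerating the document kinds, using a naive split rather than a YAML parser (illustrative only, not minikube's code):

```python
config = """\
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

def doc_kinds(multi_doc: str) -> list:
    """Return the `kind:` of each document in a multi-document YAML string."""
    kinds = []
    for doc in multi_doc.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kinds.append(line.split(":", 1)[1].strip())
    return kinds

print(doc_kinds(config))
# ['InitConfiguration', 'ClusterConfiguration', 'KubeletConfiguration', 'KubeProxyConfiguration']
```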
	
	I0229 02:18:35.524738    8584 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-314500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.5.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
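The systemd drop-in above overrides `ExecStart` with per-node kubelet flags derived from the cluster config. A hedged sketch of assembling such a line (the `kubelet_exec_start` helper is hypothetical, not minikube's actual generator):

```python
def kubelet_exec_start(binary: str, opts: dict) -> str:
    """Assemble an ExecStart line like the 10-kubeadm.conf drop-in above
    (illustrative helper; flag ordering here is alphabetical for determinism)."""
    flags = " ".join(f"--{k}={v}" for k, v in sorted(opts.items()))
    return f"ExecStart={binary} {flags}"

line = kubelet_exec_start(
    "/var/lib/minikube/binaries/v1.28.4/kubelet",
    {
        "container-runtime-endpoint": "unix:///var/run/cri-dockerd.sock",
        "hostname-override": "multinode-314500-m02",
        "kubeconfig": "/etc/kubernetes/kubelet.conf",
        "node-ip": "172.19.5.202",
    },
)
print(line)
```

Note the empty `ExecStart=` line in the real unit: systemd requires clearing the previous value before a drop-in can set a new one.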
	I0229 02:18:35.533460    8584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:18:35.552711    8584 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0229 02:18:35.552711    8584 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0229 02:18:35.561470    8584 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0229 02:18:35.584271    8584 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet
	I0229 02:18:35.584271    8584 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm
	I0229 02:18:35.584271    8584 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl
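Each download URL above carries a `?checksum=file:...sha256` hint, meaning the fetched binary is verified against a published SHA-256 digest before being cached. The verification pattern, in a minimal sketch (the `verify_sha256` helper is illustrative, not minikube's download code):

```python
import hashlib

def verify_sha256(data: bytes, expected_hex: str) -> bool:
    """Check a downloaded blob against its published .sha256 digest,
    as the ?checksum=file:...sha256 URLs above imply."""
    return hashlib.sha256(data).hexdigest() == expected_hex

blob = b"hello"
digest = hashlib.sha256(blob).hexdigest()
print(verify_sha256(blob, digest))        # True
print(verify_sha256(b"tampered", digest)) # False
```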
	I0229 02:18:36.998042    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0229 02:18:37.009077    8584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0229 02:18:37.017133    8584 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0229 02:18:37.017341    8584 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0229 02:18:37.017341    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0229 02:18:40.084939    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0229 02:18:40.095940    8584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0229 02:18:40.104473    8584 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0229 02:18:40.104473    8584 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0229 02:18:40.104473    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0229 02:18:45.263699    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:18:45.287939    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0229 02:18:45.299336    8584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0229 02:18:45.305390    8584 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0229 02:18:45.305390    8584 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0229 02:18:45.305390    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0229 02:18:45.925172    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0229 02:18:45.944660    8584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0229 02:18:45.978335    8584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:18:46.017572    8584 ssh_runner.go:195] Run: grep 172.19.2.165	control-plane.minikube.internal$ /etc/hosts
	I0229 02:18:46.024303    8584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.2.165	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
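The bash one-liner above does an idempotent upsert on `/etc/hosts`: drop any line already mapping `control-plane.minikube.internal`, then append the fresh `ip<TAB>name` entry. The same logic as a self-contained sketch (the `upsert_host` helper is hypothetical):

```python
def upsert_host(hosts_text: str, ip: str, name: str) -> str:
    """Mirror the bash pipeline above: remove any line ending in
    "<TAB>name", then append the new "ip<TAB>name" mapping."""
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n172.19.0.9\tcontrol-plane.minikube.internal\n"
after = upsert_host(before, "172.19.2.165", "control-plane.minikube.internal")
print(after)
```

Filtering on the tab-separated suffix (rather than a substring) keeps unrelated entries such as `localhost` untouched.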
	I0229 02:18:46.045317    8584 host.go:66] Checking if "multinode-314500" exists ...
	I0229 02:18:46.045993    8584 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:18:46.045993    8584 start.go:304] JoinCluster: &{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.165 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.5.202 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:18:46.046193    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0229 02:18:46.046251    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:18:48.030615    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:48.030615    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:48.030726    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:50.433720    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:18:50.434239    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:50.434239    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:18:50.638259    8584 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token o9oq2m.h2bk0u2kuwdvt40c --discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 
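The line above is the output of `kubeadm token create --print-join-command`, which minikube reuses verbatim (plus extra flags) to join the worker. A sketch of pulling the endpoint, token, and CA-cert hash out of that command string (the `parse_join` helper is hypothetical; the values are copied from the log):

```python
import re

join_cmd = ("kubeadm join control-plane.minikube.internal:8443 "
            "--token o9oq2m.h2bk0u2kuwdvt40c "
            "--discovery-token-ca-cert-hash "
            "sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022")

def parse_join(cmd: str) -> dict:
    """Extract the pieces of a `--print-join-command` line."""
    m = re.search(
        r"kubeadm join (\S+) --token (\S+) --discovery-token-ca-cert-hash (\S+)",
        cmd)
    return {"endpoint": m.group(1), "token": m.group(2), "ca_hash": m.group(3)}

print(parse_join(join_cmd))
```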
	I0229 02:18:50.638259    8584 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.5918101s)
	I0229 02:18:50.638259    8584 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.19.5.202 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 02:18:50.638259    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o9oq2m.h2bk0u2kuwdvt40c --discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-314500-m02"
	I0229 02:18:50.699991    8584 command_runner.go:130] ! W0229 02:18:50.872733    1324 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0229 02:18:50.889853    8584 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:18:53.684561    8584 command_runner.go:130] > [preflight] Running pre-flight checks
	I0229 02:18:53.684561    8584 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0229 02:18:53.684561    8584 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0229 02:18:53.684561    8584 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:18:53.684561    8584 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:18:53.684561    8584 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 02:18:53.684715    8584 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0229 02:18:53.684715    8584 command_runner.go:130] > This node has joined the cluster:
	I0229 02:18:53.684715    8584 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0229 02:18:53.684715    8584 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0229 02:18:53.684715    8584 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0229 02:18:53.684802    8584 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o9oq2m.h2bk0u2kuwdvt40c --discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-314500-m02": (3.0463738s)
	I0229 02:18:53.684802    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0229 02:18:53.931915    8584 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0229 02:18:54.149000    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=multinode-314500 minikube.k8s.io/updated_at=2024_02_29T02_18_54_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:54.276936    8584 command_runner.go:130] > node/multinode-314500-m02 labeled
	I0229 02:18:54.276936    8584 start.go:306] JoinCluster complete in 8.2304841s
	I0229 02:18:54.277943    8584 cni.go:84] Creating CNI manager for ""
	I0229 02:18:54.277943    8584 cni.go:136] 2 nodes found, recommending kindnet
	I0229 02:18:54.287322    8584 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 02:18:54.295314    8584 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 02:18:54.295314    8584 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 02:18:54.295314    8584 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 02:18:54.295430    8584 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 02:18:54.295430    8584 command_runner.go:130] > Access: 2024-02-29 02:14:07.987005400 +0000
	I0229 02:18:54.295430    8584 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 02:18:54.295430    8584 command_runner.go:130] > Change: 2024-02-29 02:13:59.368000000 +0000
	I0229 02:18:54.295430    8584 command_runner.go:130] >  Birth: -
	I0229 02:18:54.295529    8584 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 02:18:54.295574    8584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 02:18:54.339530    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 02:18:54.828066    8584 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0229 02:18:54.828174    8584 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0229 02:18:54.828174    8584 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0229 02:18:54.828174    8584 command_runner.go:130] > daemonset.apps/kindnet configured
	I0229 02:18:54.829484    8584 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:18:54.830286    8584 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:18:54.831290    8584 round_trippers.go:463] GET https://172.19.2.165:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 02:18:54.831290    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:54.831374    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:54.831374    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:54.847724    8584 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0229 02:18:54.847724    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:54.847724    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:54.847724    8584 round_trippers.go:580]     Content-Length: 291
	I0229 02:18:54.847724    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:55 GMT
	I0229 02:18:54.847724    8584 round_trippers.go:580]     Audit-Id: e12071b6-30c0-4d6d-9023-573b3f854ed4
	I0229 02:18:54.847724    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:54.847724    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:54.847724    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:54.848623    8584 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"439","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 02:18:54.848743    8584 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-314500" context rescaled to 1 replicas
	I0229 02:18:54.848818    8584 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.19.5.202 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 02:18:54.849622    8584 out.go:177] * Verifying Kubernetes components...
	I0229 02:18:54.859551    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:18:54.884779    8584 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:18:54.885357    8584 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:18:54.886093    8584 node_ready.go:35] waiting up to 6m0s for node "multinode-314500-m02" to be "Ready" ...
	I0229 02:18:54.886178    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:54.886178    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:54.886263    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:54.886292    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:54.889540    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:54.889540    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:54.889540    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:54.889540    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:54.889540    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:54.889540    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:55 GMT
	I0229 02:18:54.889540    8584 round_trippers.go:580]     Audit-Id: 16a67bb6-f9fa-47dc-9acc-fded8dd1ddf0
	I0229 02:18:54.889540    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:54.890077    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
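The `node_ready` wait loop above polls `GET /api/v1/nodes/<name>` roughly every 500ms, checking the node's `Ready` condition. A minimal sketch of that readiness check (the truncated response above omits `status`, so a stand-in structure is used; the helper name is hypothetical):

```python
def node_is_ready(node: dict) -> bool:
    """True when the node's Ready condition has status "True",
    the field the wait loop above is polling for."""
    for cond in node.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False

not_ready = {"status": {"conditions": [{"type": "Ready", "status": "False"}]}}
ready = {"status": {"conditions": [{"type": "Ready", "status": "True"}]}}
print(node_is_ready(not_ready), node_is_ready(ready))  # False True
```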
	I0229 02:18:55.391661    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:55.391763    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:55.391763    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:55.391763    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:55.397889    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:18:55.397956    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:55.397956    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:55.397956    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:55.398023    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:55.398023    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:55.398023    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:55 GMT
	I0229 02:18:55.398023    8584 round_trippers.go:580]     Audit-Id: 76e07a31-ea9d-45a0-bac4-b0a49382c981
	I0229 02:18:55.398637    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:55.894750    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:55.894865    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:55.894865    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:55.894865    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:55.898265    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:55.898265    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:55.898265    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:55.898265    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:55.898265    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:55.898265    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:56 GMT
	I0229 02:18:55.898265    8584 round_trippers.go:580]     Audit-Id: db33e390-9484-47f5-9023-d4f5140c6a73
	I0229 02:18:55.898265    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:55.899762    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:56.397336    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:56.397336    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:56.397336    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:56.397336    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:56.400945    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:56.400945    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:56.400945    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:56.401544    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:56.401544    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:56 GMT
	I0229 02:18:56.401544    8584 round_trippers.go:580]     Audit-Id: 7b5663c7-4127-436f-a916-f944f1a9362c
	I0229 02:18:56.401544    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:56.401544    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:56.401804    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:56.899952    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:56.899952    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:56.899952    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:56.899952    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:56.913982    8584 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0229 02:18:56.913982    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:56.913982    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:56.913982    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:56.913982    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:56.913982    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:56.913982    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:57 GMT
	I0229 02:18:56.913982    8584 round_trippers.go:580]     Audit-Id: 8cfef47d-31b6-4936-8599-942d267d5c62
	I0229 02:18:56.916795    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:56.917437    8584 node_ready.go:58] node "multinode-314500-m02" has status "Ready":"False"
	I0229 02:18:57.388540    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:57.388540    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:57.388540    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:57.388540    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:57.392537    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:57.392537    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:57.392537    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:57.392537    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:57 GMT
	I0229 02:18:57.392537    8584 round_trippers.go:580]     Audit-Id: 9689fe16-0d2b-45b2-bb7b-66bf24615cf8
	I0229 02:18:57.392537    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:57.392537    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:57.392537    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:57.392737    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:57.905825    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:57.905825    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:57.905825    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:57.905825    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:57.909488    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:57.909488    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:57.909488    8584 round_trippers.go:580]     Audit-Id: 93cc2139-334a-44b0-a008-1bab083e526a
	I0229 02:18:57.910054    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:57.910054    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:57.910054    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:57.910054    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:57.910054    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:58 GMT
	I0229 02:18:57.910054    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:58.400349    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:58.400349    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:58.400349    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:58.400349    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:58.404938    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:18:58.404938    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:58.404938    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:58.404938    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:58.404938    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:58 GMT
	I0229 02:18:58.404938    8584 round_trippers.go:580]     Audit-Id: 6e7258bb-b00b-4e60-87e5-7b6336f44acf
	I0229 02:18:58.405337    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:58.405337    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:58.406994    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:58.888065    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:58.888104    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:58.888154    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:58.888154    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:58.892109    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:58.892515    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:58.892515    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:58.892515    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:58.892515    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:58.892515    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:58.892515    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:59 GMT
	I0229 02:18:58.892515    8584 round_trippers.go:580]     Audit-Id: a77bafa9-ce1a-4082-a191-10262cf4fc99
	I0229 02:18:58.892786    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:59.391822    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:59.391822    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:59.391822    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:59.391822    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:59.397773    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:18:59.397840    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:59.397878    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:59.397878    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:59.397878    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:59.397878    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:59.397878    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:59 GMT
	I0229 02:18:59.397878    8584 round_trippers.go:580]     Audit-Id: f98af41c-d5cf-447b-97f9-e89ff1495066
	I0229 02:18:59.398819    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:59.399208    8584 node_ready.go:58] node "multinode-314500-m02" has status "Ready":"False"
	I0229 02:18:59.899172    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:59.899172    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:59.899241    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:59.899241    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:59.902652    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:59.902652    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:59.902652    8584 round_trippers.go:580]     Audit-Id: 5f01caf7-30bf-495c-889c-847503d5df90
	I0229 02:18:59.902652    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:59.902652    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:59.902652    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:59.902652    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:59.902652    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:00 GMT
	I0229 02:18:59.903665    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:00.389363    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:00.389363    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:00.389363    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:00.389447    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:00.393244    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:00.393502    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:00.393502    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:00 GMT
	I0229 02:19:00.393502    8584 round_trippers.go:580]     Audit-Id: c46b7762-54e7-4b1c-bff0-200199beca33
	I0229 02:19:00.393502    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:00.393502    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:00.393502    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:00.393502    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:00.393735    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:00.896187    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:00.896187    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:00.896270    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:00.896270    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:00.906719    8584 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0229 02:19:00.906719    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:00.906719    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:01 GMT
	I0229 02:19:00.906719    8584 round_trippers.go:580]     Audit-Id: 51e07ad7-2bc2-406a-a4af-4f3e1efa975e
	I0229 02:19:00.906719    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:00.906719    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:00.906719    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:00.906719    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:00.906719    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:01.387637    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:01.387637    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:01.387637    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:01.387637    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:01.428791    8584 round_trippers.go:574] Response Status: 200 OK in 41 milliseconds
	I0229 02:19:01.429599    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:01.429599    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:01.429599    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:01.429599    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:01.429599    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:01 GMT
	I0229 02:19:01.429599    8584 round_trippers.go:580]     Audit-Id: 119db968-13f7-4535-8658-337189a296ea
	I0229 02:19:01.429599    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:01.430142    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:01.430583    8584 node_ready.go:58] node "multinode-314500-m02" has status "Ready":"False"
	I0229 02:19:01.888493    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:01.888493    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:01.888493    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:01.888493    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:01.891732    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:01.891732    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:01.891732    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:01.891732    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:01.891732    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:02 GMT
	I0229 02:19:01.891732    8584 round_trippers.go:580]     Audit-Id: 42832318-f25b-490f-aff7-877895b7a3ba
	I0229 02:19:01.892570    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:01.892570    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:01.892677    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:02.396657    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:02.396657    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:02.396657    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:02.396657    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:02.399223    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:02.399223    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:02.399223    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:02.399223    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:02.399223    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:02.399223    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:02.399223    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:02 GMT
	I0229 02:19:02.399223    8584 round_trippers.go:580]     Audit-Id: 35d2dceb-2382-4616-b8e5-6a0d14e043ab
	I0229 02:19:02.400063    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:02.900535    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:02.900535    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:02.900535    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:02.900535    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:02.905068    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:19:02.905068    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:02.905068    8584 round_trippers.go:580]     Audit-Id: ace017cd-ee9f-4bd0-9b52-397013c1b792
	I0229 02:19:02.905068    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:02.905068    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:02.905068    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:02.905068    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:02.905068    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:03 GMT
	I0229 02:19:02.905391    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:03.394230    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:03.394230    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:03.394230    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:03.394230    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:03.396650    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:03.396650    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:03.396650    8584 round_trippers.go:580]     Audit-Id: 5481e7b5-4a4c-446d-a04a-bc2f56d87626
	I0229 02:19:03.396650    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:03.396650    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:03.396650    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:03.396650    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:03.396650    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:03 GMT
	I0229 02:19:03.397840    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:03.886639    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:03.886639    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:03.886639    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:03.886639    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:03.890655    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:19:03.890716    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:03.890716    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:03.890716    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:03.890784    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:03.890784    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:03.890784    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:04 GMT
	I0229 02:19:03.890784    8584 round_trippers.go:580]     Audit-Id: e654a293-e86e-4326-8709-9c556c1b6a16
	I0229 02:19:03.890957    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:03.891302    8584 node_ready.go:58] node "multinode-314500-m02" has status "Ready":"False"
	I0229 02:19:04.395161    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:04.395161    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:04.395161    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:04.395161    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:04.398988    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:04.398988    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:04.398988    8584 round_trippers.go:580]     Audit-Id: 5bd0cf7c-4754-40ca-abc1-50d4188e1af1
	I0229 02:19:04.398988    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:04.398988    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:04.398988    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:04.399337    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:04.399337    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:04 GMT
	I0229 02:19:04.399498    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:04.900506    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:04.900506    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:04.900588    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:04.900588    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:04.904345    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:04.904345    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:04.904345    8584 round_trippers.go:580]     Audit-Id: ea6f6d91-b34a-498d-9365-83f52c171ba8
	I0229 02:19:04.904345    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:04.904345    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:04.904345    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:04.904345    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:04.904345    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:05 GMT
	I0229 02:19:04.905267    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:05.390945    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:05.391025    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:05.391025    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:05.391025    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:05.394999    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:05.395256    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:05.395256    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:05.395256    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:05 GMT
	I0229 02:19:05.395256    8584 round_trippers.go:580]     Audit-Id: 48db6fca-fdd8-4b8e-8acf-d8508f01bc99
	I0229 02:19:05.395256    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:05.395256    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:05.395256    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:05.395433    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:05.897185    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:05.897253    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:05.897253    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:05.897253    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:05.901327    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:19:05.901327    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:05.901327    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:05.901327    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:05.901327    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:06 GMT
	I0229 02:19:05.901327    8584 round_trippers.go:580]     Audit-Id: cc504900-e223-4f88-81bf-24d20ae238cd
	I0229 02:19:05.901327    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:05.901327    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:05.901610    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:05.901610    8584 node_ready.go:58] node "multinode-314500-m02" has status "Ready":"False"
	I0229 02:19:06.399376    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:06.399376    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:06.399445    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:06.399445    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:06.402595    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:06.402595    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:06.402595    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:06.402595    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:06.402595    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:06.402595    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:06 GMT
	I0229 02:19:06.402595    8584 round_trippers.go:580]     Audit-Id: 9f0e2a8e-137c-4cc5-9263-1f23093b3170
	I0229 02:19:06.402595    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:06.403455    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:06.899253    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:06.899323    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:06.899323    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:06.899323    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:06.903424    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:06.903424    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:06.903485    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:06.903485    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:06.903485    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:06.903485    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:07 GMT
	I0229 02:19:06.903485    8584 round_trippers.go:580]     Audit-Id: 70639d9f-98b5-4954-9cc2-ddac86c9913d
	I0229 02:19:06.903485    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:06.903620    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:07.401908    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:07.401994    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:07.402081    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:07.402081    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:07.405358    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:07.405358    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:07.405358    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:07.405358    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:07.405358    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:07 GMT
	I0229 02:19:07.406237    8584 round_trippers.go:580]     Audit-Id: 76fc92ca-3360-4c4e-bd5f-1f7bf5cc52d9
	I0229 02:19:07.406237    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:07.406237    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:07.406494    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:07.888332    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:07.888410    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:07.888410    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:07.888410    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:07.894132    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:19:07.894651    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:07.894736    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:08 GMT
	I0229 02:19:07.894736    8584 round_trippers.go:580]     Audit-Id: 1c3c1ce0-0769-425d-afd3-d1bd32756322
	I0229 02:19:07.894736    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:07.894736    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:07.894736    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:07.894736    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:07.894736    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:08.389430    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:08.389523    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:08.389523    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:08.389523    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:08.392857    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:08.392857    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:08.392857    8584 round_trippers.go:580]     Audit-Id: b2f601d5-c1ee-47a1-b56e-755a0c4ad649
	I0229 02:19:08.393710    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:08.393710    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:08.393710    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:08.393710    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:08.393710    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:08 GMT
	I0229 02:19:08.393710    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:08.393710    8584 node_ready.go:58] node "multinode-314500-m02" has status "Ready":"False"
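The trace above shows minikube re-issuing the same `GET /api/v1/nodes/<name>` roughly every 500 ms until the node's `Ready` condition flips to `True`. A minimal sketch of that poll-until-ready pattern, with a hypothetical `check` callable standing in for the actual API request (names and intervals here are illustrative, not minikube's implementation):

```python
import time

def wait_for_ready(check, interval=0.5, timeout=300.0):
    """Poll check() until it returns True or the timeout elapses.

    `check` stands in for the GET on the node object whose Ready
    condition is inspected; returns True once the node is Ready,
    False if the deadline passes first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Simulated node that becomes Ready on the third poll.
state = {"polls": 0}
def fake_check():
    state["polls"] += 1
    return state["polls"] >= 3

print(wait_for_ready(fake_check, interval=0.01))  # → True
```

In the real log each iteration also refreshes the cached node object (note the `resourceVersion` advancing from 593 to 610), which is why the full response body is fetched on every poll rather than only the status field.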
	I0229 02:19:08.887326    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:08.887326    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:08.887326    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:08.887326    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:08.891027    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:08.891027    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:08.891027    8584 round_trippers.go:580]     Audit-Id: 0f021001-a406-44d6-94d8-93ef736fbe42
	I0229 02:19:08.891670    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:08.891670    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:08.891670    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:08.891670    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:08.891670    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:09 GMT
	I0229 02:19:08.892089    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:09.389425    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:09.389425    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.389425    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.389425    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.396421    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:19:09.396421    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.396421    8584 round_trippers.go:580]     Audit-Id: 5fbac51a-70b7-4815-bf98-6c7af5b38950
	I0229 02:19:09.396421    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.396421    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.396421    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.396421    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.396421    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:09 GMT
	I0229 02:19:09.396421    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:09.894460    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:09.894728    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.894728    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.894728    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.898034    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:09.898034    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.898864    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.898864    8584 round_trippers.go:580]     Audit-Id: 30013302-3f77-4414-bdaf-b073ae7cc7ad
	I0229 02:19:09.898864    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.898864    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.898864    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.898864    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.899055    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"622","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3255 chars]
	I0229 02:19:09.899681    8584 node_ready.go:49] node "multinode-314500-m02" has status "Ready":"True"
	I0229 02:19:09.899760    8584 node_ready.go:38] duration metric: took 15.0128311s waiting for node "multinode-314500-m02" to be "Ready" ...
	I0229 02:19:09.899760    8584 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:19:09.899988    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods
	I0229 02:19:09.899988    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.900078    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.900078    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.906930    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:19:09.906930    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.906930    8584 round_trippers.go:580]     Audit-Id: 89da8e7e-82dd-4ddb-8b70-e96b345eeabf
	I0229 02:19:09.906930    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.906930    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.907383    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.907383    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.907383    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.908247    8584 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"622"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67426 chars]
	I0229 02:19:09.910949    8584 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.911270    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:19:09.911270    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.911270    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.911270    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.913489    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.913489    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.914473    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.914473    8584 round_trippers.go:580]     Audit-Id: 7bc2e034-8bca-4f19-a593-29d856effd79
	I0229 02:19:09.914473    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.914473    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.914473    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.914473    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.914473    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6282 chars]
	I0229 02:19:09.914473    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:09.915219    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.915219    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.915219    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.917425    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.917425    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.918440    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.918440    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.918440    8584 round_trippers.go:580]     Audit-Id: 4809d0f9-91ec-4b02-b3ae-312c0e7cd898
	I0229 02:19:09.918440    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.918440    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.918440    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.918977    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 02:19:09.919175    8584 pod_ready.go:92] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:09.919175    8584 pod_ready.go:81] duration metric: took 7.9754ms waiting for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.919175    8584 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.919175    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-314500
	I0229 02:19:09.919700    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.919700    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.919700    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.921900    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.922797    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.922797    8584 round_trippers.go:580]     Audit-Id: 99d5dfdd-529d-414a-bbab-ec3564725035
	I0229 02:19:09.922797    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.922797    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.922797    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.922797    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.922869    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.922990    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-314500","namespace":"kube-system","uid":"6fc42e7c-48f9-46df-bf2f-861e0684e37f","resourceVersion":"323","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.2.165:2379","kubernetes.io/config.hash":"0b84e88097a2b59a9c108b0f9fa2b889","kubernetes.io/config.mirror":"0b84e88097a2b59a9c108b0f9fa2b889","kubernetes.io/config.seen":"2024-02-29T02:15:52.221392786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5852 chars]
	I0229 02:19:09.923537    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:09.923537    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.923537    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.923537    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.926234    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.926984    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.926984    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.926984    8584 round_trippers.go:580]     Audit-Id: df2638c9-ac54-4653-bb22-db74ffa3024c
	I0229 02:19:09.926984    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.926984    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.926984    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.926984    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.927160    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 02:19:09.927439    8584 pod_ready.go:92] pod "etcd-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:09.927439    8584 pod_ready.go:81] duration metric: took 8.2637ms waiting for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.927439    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.927439    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-314500
	I0229 02:19:09.927439    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.927439    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.927439    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.930125    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.930125    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.930125    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.930125    8584 round_trippers.go:580]     Audit-Id: 7d1cb678-5653-4d94-81c2-91c8fa733734
	I0229 02:19:09.930125    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.930125    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.930125    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.930125    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.931265    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-314500","namespace":"kube-system","uid":"fc266082-ff2c-4bd1-951f-11dc765a28ae","resourceVersion":"303","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.2.165:8443","kubernetes.io/config.hash":"75abc10fab898952206cc1d682d3c922","kubernetes.io/config.mirror":"75abc10fab898952206cc1d682d3c922","kubernetes.io/config.seen":"2024-02-29T02:15:52.221397486Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7390 chars]
	I0229 02:19:09.931368    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:09.931368    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.931368    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.931368    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.933978    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.933978    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.933978    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.934688    8584 round_trippers.go:580]     Audit-Id: d82b569f-a41c-4dec-b10e-f07a48060338
	I0229 02:19:09.934688    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.934688    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.934688    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.934688    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.934688    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 02:19:09.935545    8584 pod_ready.go:92] pod "kube-apiserver-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:09.935545    8584 pod_ready.go:81] duration metric: took 8.1061ms waiting for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.935605    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.935677    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-314500
	I0229 02:19:09.935677    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.935677    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.935677    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.938290    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.938290    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.938290    8584 round_trippers.go:580]     Audit-Id: f1c6fb4d-9811-4d1e-b351-72c1daa1ec71
	I0229 02:19:09.938290    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.938290    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.938290    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.938290    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.938290    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.938290    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-314500","namespace":"kube-system","uid":"58e57902-e113-44a9-b5b5-4aba2ba13491","resourceVersion":"302","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.mirror":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.seen":"2024-02-29T02:15:52.221398986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6965 chars]
	I0229 02:19:09.939348    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:09.939348    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.939348    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.939348    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.943696    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:19:09.943696    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.943696    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.943696    8584 round_trippers.go:580]     Audit-Id: 8b6a9ffa-4316-4827-a442-9ff4f30d586a
	I0229 02:19:09.943696    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.943918    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.943918    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.943918    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.944022    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 02:19:09.944022    8584 pod_ready.go:92] pod "kube-controller-manager-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:09.944022    8584 pod_ready.go:81] duration metric: took 8.417ms waiting for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.944022    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:10.097935    8584 request.go:629] Waited for 152.8628ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gbrl
	I0229 02:19:10.098174    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gbrl
	I0229 02:19:10.098219    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:10.098219    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:10.098219    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:10.104877    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:19:10.104877    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:10.104877    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:10.104877    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:10.104877    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:10.104877    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:10.104877    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:10.104877    8584 round_trippers.go:580]     Audit-Id: 9eb11c5e-881c-42bc-9be1-5f24ca6abc36
	I0229 02:19:10.105667    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4gbrl","generateName":"kube-proxy-","namespace":"kube-system","uid":"accb56cb-79ee-4f16-b05e-91bf554c4a60","resourceVersion":"606","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0229 02:19:10.301343    8584 request.go:629] Waited for 194.8528ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:10.301407    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:10.301407    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:10.301407    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:10.301407    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:10.304982    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:10.304982    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:10.304982    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:10.304982    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:10.305600    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:10.305600    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:10.305600    8584 round_trippers.go:580]     Audit-Id: c18086a1-3697-45c4-8944-d8d7689207d6
	I0229 02:19:10.305600    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:10.305690    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"622","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3255 chars]
	I0229 02:19:10.306238    8584 pod_ready.go:92] pod "kube-proxy-4gbrl" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:10.306384    8584 pod_ready.go:81] duration metric: took 362.2941ms waiting for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:10.306444    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:10.504518    8584 request.go:629] Waited for 197.6938ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:19:10.504606    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:19:10.504682    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:10.504682    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:10.504682    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:10.511019    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:19:10.511019    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:10.511019    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:10.511019    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:10.511019    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:10.511019    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:10.511019    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:10.511019    8584 round_trippers.go:580]     Audit-Id: 4ae6576d-36bf-4327-85f5-11b14661f5ab
	I0229 02:19:10.511729    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6r6j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b84b22d-3786-4f9e-a23a-c7cfc93bb671","resourceVersion":"394","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0229 02:19:10.706346    8584 request.go:629] Waited for 193.8669ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:10.706642    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:10.706642    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:10.706642    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:10.706642    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:10.712840    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:19:10.712895    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:10.712978    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:10.713002    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:10.713002    8584 round_trippers.go:580]     Audit-Id: c5866151-f886-44d1-8800-b5f13dbf5b70
	I0229 02:19:10.713002    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:10.713002    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:10.713002    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:10.713002    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 02:19:10.713751    8584 pod_ready.go:92] pod "kube-proxy-6r6j4" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:10.713751    8584 pod_ready.go:81] duration metric: took 407.2841ms waiting for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:10.713751    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:10.908577    8584 request.go:629] Waited for 194.7255ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:19:10.908997    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:19:10.908997    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:10.908997    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:10.908997    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:10.912468    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:10.912468    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:10.912468    8584 round_trippers.go:580]     Audit-Id: a20a5e1a-e0b4-47eb-ab35-b1c357c97ae2
	I0229 02:19:10.912468    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:10.912468    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:10.912468    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:10.912468    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:10.912468    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:11 GMT
	I0229 02:19:10.913104    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-314500","namespace":"kube-system","uid":"31fcecc6-17de-43a6-892d-37cd915de64b","resourceVersion":"288","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.mirror":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.seen":"2024-02-29T02:15:52.221399886Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4695 chars]
	I0229 02:19:11.095871    8584 request.go:629] Waited for 181.8524ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:11.096146    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:11.096146    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:11.096146    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:11.096146    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:11.104050    8584 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:19:11.104316    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:11.104316    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:11 GMT
	I0229 02:19:11.104316    8584 round_trippers.go:580]     Audit-Id: 7e9bd965-e810-45bc-85a8-4bb609661efb
	I0229 02:19:11.104316    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:11.104316    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:11.104316    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:11.104368    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:11.104637    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 02:19:11.105147    8584 pod_ready.go:92] pod "kube-scheduler-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:11.105147    8584 pod_ready.go:81] duration metric: took 391.3742ms waiting for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:11.105147    8584 pod_ready.go:38] duration metric: took 1.2053198s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:19:11.105147    8584 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:19:11.114287    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:19:11.138275    8584 system_svc.go:56] duration metric: took 33.1261ms WaitForService to wait for kubelet.
	I0229 02:19:11.138407    8584 kubeadm.go:581] duration metric: took 16.2886816s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:19:11.138478    8584 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:19:11.300588    8584 request.go:629] Waited for 161.8606ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes
	I0229 02:19:11.300980    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes
	I0229 02:19:11.300980    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:11.300980    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:11.300980    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:11.304358    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:11.304358    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:11.304358    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:11 GMT
	I0229 02:19:11.304358    8584 round_trippers.go:580]     Audit-Id: 51c168c1-a4fe-434a-973b-2f988dadac6f
	I0229 02:19:11.304358    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:11.304358    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:11.304358    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:11.304358    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:11.305480    8584 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"624"},"items":[{"metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9257 chars]
	I0229 02:19:11.306090    8584 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:19:11.306162    8584 node_conditions.go:123] node cpu capacity is 2
	I0229 02:19:11.306162    8584 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:19:11.306162    8584 node_conditions.go:123] node cpu capacity is 2
	I0229 02:19:11.306162    8584 node_conditions.go:105] duration metric: took 167.6741ms to run NodePressure ...
	I0229 02:19:11.306162    8584 start.go:228] waiting for startup goroutines ...
	I0229 02:19:11.306266    8584 start.go:242] writing updated cluster config ...
	I0229 02:19:11.315752    8584 ssh_runner.go:195] Run: rm -f paused
	I0229 02:19:11.444114    8584 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:19:11.444987    8584 out.go:177] * Done! kubectl is now configured to use "multinode-314500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 29 02:16:16 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:16.836943598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:16:16 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:16.844762626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:16:16 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:16.844839230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:16:16 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:16.844857831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:16:16 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:16.845360758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:16:16 multinode-314500 cri-dockerd[1179]: time="2024-02-29T02:16:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/13f6ae46b7d00cb80295b3fe4d8eaa84529c5242f022e3b07bef994969a9441e/resolv.conf as [nameserver 172.19.0.1]"
	Feb 29 02:16:17 multinode-314500 cri-dockerd[1179]: time="2024-02-29T02:16:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8c944d91b62504f7fd894d21889df5d67be765e4f02c1950a7a2a05132205f99/resolv.conf as [nameserver 172.19.0.1]"
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.077064890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.077136794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.077154495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.077248800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.216491649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.216758964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.217093082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.217451101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:19:35 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:35.111682320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:19:35 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:35.112609163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:19:35 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:35.112830174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:19:35 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:35.113067885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:19:35 multinode-314500 cri-dockerd[1179]: time="2024-02-29T02:19:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ffe504a01e326c3100f593c8c5221a31307571eedec738e86cb135ea892fdda2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 29 02:19:36 multinode-314500 cri-dockerd[1179]: time="2024-02-29T02:19:36Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Feb 29 02:19:36 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:36.486937597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:19:36 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:36.487123907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:19:36 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:36.487169510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:19:36 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:36.487422023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	56fdd268ee231       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   47 seconds ago      Running             busybox                   0                   ffe504a01e326       busybox-5b5d89c9d6-qcblm
	11c14ebdfaf67       ead0a4a53df89                                                                                         4 minutes ago       Running             coredns                   0                   8c944d91b6250       coredns-5dd5756b68-8g6tg
	cf65b06d29a0d       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   13f6ae46b7d00       storage-provisioner
	dd61788b0a0d8       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              4 minutes ago       Running             kindnet-cni               0                   edb41bd5e75d4       kindnet-t9r77
	c93e331307466       83f6cc407eed8                                                                                         4 minutes ago       Running             kube-proxy                0                   4b10f8bd940b8       kube-proxy-6r6j4
	e5bc2b41493bf       73deb9a3f7025                                                                                         4 minutes ago       Running             etcd                      0                   b93004a3ca704       etcd-multinode-314500
	ab0c4864aee58       e3db313c6dbc0                                                                                         4 minutes ago       Running             kube-scheduler            0                   bf7b9750ae9ea       kube-scheduler-multinode-314500
	26b1ab05f99a9       d058aa5ab969c                                                                                         4 minutes ago       Running             kube-controller-manager   0                   96810146c69cf       kube-controller-manager-multinode-314500
	9815e253e1a06       7fe0e6f37db33                                                                                         4 minutes ago       Running             kube-apiserver            0                   2d13a46d83899       kube-apiserver-multinode-314500
	
	
	==> coredns [11c14ebdfaf6] <==
	[INFO] 10.244.1.2:39886 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00019781s
	[INFO] 10.244.0.3:51772 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000254814s
	[INFO] 10.244.0.3:55803 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074704s
	[INFO] 10.244.0.3:52953 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063204s
	[INFO] 10.244.0.3:35356 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000217512s
	[INFO] 10.244.0.3:51868 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000073604s
	[INFO] 10.244.0.3:43420 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103505s
	[INFO] 10.244.0.3:51899 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000210611s
	[INFO] 10.244.0.3:56850 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00018761s
	[INFO] 10.244.1.2:34482 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097705s
	[INFO] 10.244.1.2:36018 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150108s
	[INFO] 10.244.1.2:50932 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064203s
	[INFO] 10.244.1.2:38051 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129007s
	[INFO] 10.244.0.3:41360 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000316917s
	[INFO] 10.244.0.3:60778 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160008s
	[INFO] 10.244.0.3:57010 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133407s
	[INFO] 10.244.0.3:43292 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127407s
	[INFO] 10.244.1.2:34858 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135708s
	[INFO] 10.244.1.2:60624 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000269714s
	[INFO] 10.244.1.2:46116 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100405s
	[INFO] 10.244.1.2:57306 - 5 "PTR IN 1.0.19.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000138608s
	[INFO] 10.244.0.3:57177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000084804s
	[INFO] 10.244.0.3:55463 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000274415s
	[INFO] 10.244.0.3:36032 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185809s
	[INFO] 10.244.0.3:42058 - 5 "PTR IN 1.0.19.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000083604s
	
	
	==> describe nodes <==
	Name:               multinode-314500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-314500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=multinode-314500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T02_15_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:15:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-314500
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:20:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:19:58 +0000   Thu, 29 Feb 2024 02:15:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:19:58 +0000   Thu, 29 Feb 2024 02:15:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:19:58 +0000   Thu, 29 Feb 2024 02:15:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:19:58 +0000   Thu, 29 Feb 2024 02:16:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.2.165
	  Hostname:    multinode-314500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 fcca135ba85d4e2a802ef18b508e0e63
	  System UUID:                d0919ea2-7b7b-e246-9348-925d639776b8
	  Boot ID:                    2a7c10fd-1651-4220-b9f5-aa3595c1b1ae
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-qcblm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 coredns-5dd5756b68-8g6tg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m18s
	  kube-system                 etcd-multinode-314500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m31s
	  kube-system                 kindnet-t9r77                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m18s
	  kube-system                 kube-apiserver-multinode-314500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-controller-manager-multinode-314500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-proxy-6r6j4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-scheduler-multinode-314500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m40s (x8 over 4m40s)  kubelet          Node multinode-314500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m40s (x8 over 4m40s)  kubelet          Node multinode-314500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m40s (x7 over 4m40s)  kubelet          Node multinode-314500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m31s                  kubelet          Node multinode-314500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m31s                  kubelet          Node multinode-314500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m31s                  kubelet          Node multinode-314500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m19s                  node-controller  Node multinode-314500 event: Registered Node multinode-314500 in Controller
	  Normal  NodeReady                4m7s                   kubelet          Node multinode-314500 status is now: NodeReady
	
	
	Name:               multinode-314500-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-314500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=multinode-314500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_02_29T02_18_54_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:18:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-314500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:20:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:19:54 +0000   Thu, 29 Feb 2024 02:18:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:19:54 +0000   Thu, 29 Feb 2024 02:18:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:19:54 +0000   Thu, 29 Feb 2024 02:18:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:19:54 +0000   Thu, 29 Feb 2024 02:19:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.5.202
	  Hostname:    multinode-314500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 77aee02c4bee424dbfd3564939d0a240
	  System UUID:                b1627b4d-7d75-ed47-9ee8-e9d67e74df72
	  Boot ID:                    87f7a67a-8d8e-41a1-ae90-0f8737e86f14
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-826w2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kindnet-6r7b8               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      90s
	  kube-system                 kube-proxy-4gbrl            0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 81s                kube-proxy       
	  Normal  NodeHasSufficientMemory  90s (x5 over 92s)  kubelet          Node multinode-314500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s (x5 over 92s)  kubelet          Node multinode-314500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     90s (x5 over 92s)  kubelet          Node multinode-314500-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           89s                node-controller  Node multinode-314500-m02 event: Registered Node multinode-314500-m02 in Controller
	  Normal  NodeReady                74s                kubelet          Node multinode-314500-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.779304] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	[Feb29 02:14] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +40.611904] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.181228] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[Feb29 02:15] systemd-fstab-generator[907]: Ignoring "noauto" option for root device
	[  +0.106381] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.524061] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.195671] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[  +0.235266] systemd-fstab-generator[974]: Ignoring "noauto" option for root device
	[  +1.802878] systemd-fstab-generator[1132]: Ignoring "noauto" option for root device
	[  +0.200825] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.187739] systemd-fstab-generator[1156]: Ignoring "noauto" option for root device
	[  +0.272932] systemd-fstab-generator[1171]: Ignoring "noauto" option for root device
	[ +12.596345] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.100135] kauditd_printk_skb: 205 callbacks suppressed
	[  +9.124872] systemd-fstab-generator[1655]: Ignoring "noauto" option for root device
	[  +0.104351] kauditd_printk_skb: 51 callbacks suppressed
	[  +8.767706] systemd-fstab-generator[2631]: Ignoring "noauto" option for root device
	[  +0.137526] kauditd_printk_skb: 62 callbacks suppressed
	[Feb29 02:16] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.600907] kauditd_printk_skb: 29 callbacks suppressed
	[Feb29 02:19] hrtimer: interrupt took 2175903 ns
	[  +0.988605] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [e5bc2b41493b] <==
	{"level":"info","ts":"2024-02-29T02:15:45.444825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 switched to configuration voters=(2921898997477636162)"}
	{"level":"info","ts":"2024-02-29T02:15:45.449232Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b70ab9772a44d22c","local-member-id":"288caba846397842","added-peer-id":"288caba846397842","added-peer-peer-urls":["https://172.19.2.165:2380"]}
	{"level":"info","ts":"2024-02-29T02:15:45.445002Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.2.165:2380"}
	{"level":"info","ts":"2024-02-29T02:15:45.451781Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"288caba846397842","initial-advertise-peer-urls":["https://172.19.2.165:2380"],"listen-peer-urls":["https://172.19.2.165:2380"],"advertise-client-urls":["https://172.19.2.165:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.2.165:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T02:15:45.451813Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T02:15:45.456207Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.2.165:2380"}
	{"level":"info","ts":"2024-02-29T02:15:46.279614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-29T02:15:46.279927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-29T02:15:46.280297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 received MsgPreVoteResp from 288caba846397842 at term 1"}
	{"level":"info","ts":"2024-02-29T02:15:46.280432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 became candidate at term 2"}
	{"level":"info","ts":"2024-02-29T02:15:46.280578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 received MsgVoteResp from 288caba846397842 at term 2"}
	{"level":"info","ts":"2024-02-29T02:15:46.280732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 became leader at term 2"}
	{"level":"info","ts":"2024-02-29T02:15:46.280856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 288caba846397842 elected leader 288caba846397842 at term 2"}
	{"level":"info","ts":"2024-02-29T02:15:46.285663Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:15:46.289486Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"288caba846397842","local-member-attributes":"{Name:multinode-314500 ClientURLs:[https://172.19.2.165:2379]}","request-path":"/0/members/288caba846397842/attributes","cluster-id":"b70ab9772a44d22c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T02:15:46.289834Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:15:46.292192Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T02:15:46.295691Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b70ab9772a44d22c","local-member-id":"288caba846397842","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:15:46.29636Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:15:46.296607Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:15:46.295902Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:15:46.298395Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.2.165:2379"}
	{"level":"info","ts":"2024-02-29T02:15:46.344121Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T02:15:46.352275Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T02:19:03.699393Z","caller":"traceutil/trace.go:171","msg":"trace[2003273810] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"117.265217ms","start":"2024-02-29T02:19:03.582107Z","end":"2024-02-29T02:19:03.699373Z","steps":["trace[2003273810] 'process raft request'  (duration: 117.135811ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:20:23 up 6 min,  0 users,  load average: 0.27, 0.26, 0.13
	Linux multinode-314500 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [dd61788b0a0d] <==
	I0229 02:19:22.551575       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:19:32.559415       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:19:32.559514       1 main.go:227] handling current node
	I0229 02:19:32.559566       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:19:32.559578       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:19:42.574855       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:19:42.574895       1 main.go:227] handling current node
	I0229 02:19:42.574907       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:19:42.574914       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:19:52.588384       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:19:52.588489       1 main.go:227] handling current node
	I0229 02:19:52.588504       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:19:52.588512       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:20:02.595921       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:20:02.596025       1 main.go:227] handling current node
	I0229 02:20:02.596038       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:20:02.596047       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:20:12.603058       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:20:12.603161       1 main.go:227] handling current node
	I0229 02:20:12.603174       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:20:12.603182       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:20:22.614871       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:20:22.614930       1 main.go:227] handling current node
	I0229 02:20:22.614948       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:20:22.614959       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [9815e253e1a0] <==
	I0229 02:15:48.203853       1 cache.go:39] Caches are synced for autoregister controller
	I0229 02:15:48.232330       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 02:15:48.232740       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 02:15:48.234868       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 02:15:48.236962       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 02:15:48.238608       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0229 02:15:48.238634       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0229 02:15:48.240130       1 controller.go:624] quota admission added evaluator for: namespaces
	I0229 02:15:48.259371       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 02:15:48.288795       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 02:15:49.050665       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0229 02:15:49.064719       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0229 02:15:49.064738       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0229 02:15:49.909107       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0229 02:15:49.978633       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0229 02:15:50.069966       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0229 02:15:50.082357       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.19.2.165]
	I0229 02:15:50.083992       1 controller.go:624] quota admission added evaluator for: endpoints
	I0229 02:15:50.090388       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0229 02:15:50.155063       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0229 02:15:51.998918       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0229 02:15:52.011885       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0229 02:15:52.026788       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0229 02:16:05.076718       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0229 02:16:05.263867       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [26b1ab05f99a] <==
	I0229 02:16:05.737501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.104µs"
	I0229 02:16:16.382507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="902.949µs"
	I0229 02:16:16.409455       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.604µs"
	I0229 02:16:17.774033       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="155.809µs"
	I0229 02:16:17.862409       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="36.897ms"
	I0229 02:16:17.868791       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.404µs"
	I0229 02:16:19.467304       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0229 02:18:53.354208       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-314500-m02\" does not exist"
	I0229 02:18:53.368926       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-314500-m02" podCIDRs=["10.244.1.0/24"]
	I0229 02:18:53.372475       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4gbrl"
	I0229 02:18:53.376875       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6r7b8"
	I0229 02:18:54.492680       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-314500-m02"
	I0229 02:18:54.493161       1 event.go:307] "Event occurred" object="multinode-314500-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-314500-m02 event: Registered Node multinode-314500-m02 in Controller"
	I0229 02:19:09.849595       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-314500-m02"
	I0229 02:19:34.656812       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0229 02:19:34.678854       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-826w2"
	I0229 02:19:34.689390       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-qcblm"
	I0229 02:19:34.698278       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="40.961829ms"
	I0229 02:19:34.725163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="26.446345ms"
	I0229 02:19:34.739405       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.836452ms"
	I0229 02:19:34.740025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.602µs"
	I0229 02:19:36.713325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="8.816271ms"
	I0229 02:19:36.713610       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="108.606µs"
	I0229 02:19:37.478878       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.961832ms"
	I0229 02:19:37.479378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="145.408µs"
	
	
	==> kube-proxy [c93e33130746] <==
	I0229 02:16:07.488822       1 server_others.go:69] "Using iptables proxy"
	I0229 02:16:07.511408       1 node.go:141] Successfully retrieved node IP: 172.19.2.165
	I0229 02:16:07.646052       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 02:16:07.646080       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 02:16:07.652114       1 server_others.go:152] "Using iptables Proxier"
	I0229 02:16:07.652346       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 02:16:07.652698       1 server.go:846] "Version info" version="v1.28.4"
	I0229 02:16:07.652712       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:16:07.654751       1 config.go:188] "Starting service config controller"
	I0229 02:16:07.655126       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 02:16:07.655241       1 config.go:97] "Starting endpoint slice config controller"
	I0229 02:16:07.655327       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 02:16:07.656324       1 config.go:315] "Starting node config controller"
	I0229 02:16:07.676099       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 02:16:07.679653       1 shared_informer.go:318] Caches are synced for node config
	I0229 02:16:07.757691       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 02:16:07.757737       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [ab0c4864aee5] <==
	W0229 02:15:48.237220       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 02:15:48.237295       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 02:15:49.044071       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 02:15:49.044214       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 02:15:49.085996       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 02:15:49.086626       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 02:15:49.106158       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 02:15:49.106848       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 02:15:49.126181       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 02:15:49.126580       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0229 02:15:49.196878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 02:15:49.196987       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0229 02:15:49.236282       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 02:15:49.236658       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 02:15:49.372072       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 02:15:49.372116       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 02:15:49.403666       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 02:15:49.403942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 02:15:49.418593       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0229 02:15:49.418838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0229 02:15:49.492335       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0229 02:15:49.492758       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0229 02:15:49.585577       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 02:15:49.585986       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0229 02:15:52.113114       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 02:16:16 multinode-314500 kubelet[2651]: I0229 02:16:16.535861    2651 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9780520b-8ff9-408a-ab6f-41b63790ccd1-tmp\") pod \"storage-provisioner\" (UID: \"9780520b-8ff9-408a-ab6f-41b63790ccd1\") " pod="kube-system/storage-provisioner"
	Feb 29 02:16:17 multinode-314500 kubelet[2651]: I0229 02:16:17.798513    2651 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-8g6tg" podStartSLOduration=12.798466002 podCreationTimestamp="2024-02-29 02:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-29 02:16:17.773589554 +0000 UTC m=+25.814644975" watchObservedRunningTime="2024-02-29 02:16:17.798466002 +0000 UTC m=+25.839521323"
	Feb 29 02:16:17 multinode-314500 kubelet[2651]: I0229 02:16:17.817387    2651 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=5.817344026 podCreationTimestamp="2024-02-29 02:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-29 02:16:17.800801729 +0000 UTC m=+25.841857150" watchObservedRunningTime="2024-02-29 02:16:17.817344026 +0000 UTC m=+25.858399347"
	Feb 29 02:16:52 multinode-314500 kubelet[2651]: E0229 02:16:52.340637    2651 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:16:52 multinode-314500 kubelet[2651]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:16:52 multinode-314500 kubelet[2651]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:16:52 multinode-314500 kubelet[2651]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:16:52 multinode-314500 kubelet[2651]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:17:52 multinode-314500 kubelet[2651]: E0229 02:17:52.339883    2651 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:17:52 multinode-314500 kubelet[2651]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:17:52 multinode-314500 kubelet[2651]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:17:52 multinode-314500 kubelet[2651]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:17:52 multinode-314500 kubelet[2651]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:18:52 multinode-314500 kubelet[2651]: E0229 02:18:52.345880    2651 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:18:52 multinode-314500 kubelet[2651]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:18:52 multinode-314500 kubelet[2651]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:18:52 multinode-314500 kubelet[2651]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:18:52 multinode-314500 kubelet[2651]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:19:34 multinode-314500 kubelet[2651]: I0229 02:19:34.698903    2651 topology_manager.go:215] "Topology Admit Handler" podUID="97a45dff-5653-45e8-9aac-76dbca48c759" podNamespace="default" podName="busybox-5b5d89c9d6-qcblm"
	Feb 29 02:19:34 multinode-314500 kubelet[2651]: I0229 02:19:34.854716    2651 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fv6k\" (UniqueName: \"kubernetes.io/projected/97a45dff-5653-45e8-9aac-76dbca48c759-kube-api-access-4fv6k\") pod \"busybox-5b5d89c9d6-qcblm\" (UID: \"97a45dff-5653-45e8-9aac-76dbca48c759\") " pod="default/busybox-5b5d89c9d6-qcblm"
	Feb 29 02:19:52 multinode-314500 kubelet[2651]: E0229 02:19:52.340057    2651 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:19:52 multinode-314500 kubelet[2651]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:19:52 multinode-314500 kubelet[2651]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:19:52 multinode-314500 kubelet[2651]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:19:52 multinode-314500 kubelet[2651]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 02:20:15.967362    2400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-314500 -n multinode-314500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-314500 -n multinode-314500: (11.3153418s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-314500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (54.12s)

                                                
                                    
TestMultiNode/serial/AddNode (232.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-314500 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-314500 -v 3 --alsologtostderr: exit status 90 (3m20.8413972s)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-314500
	* Starting worker node multinode-314500-m03 in cluster multinode-314500
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 02:20:36.553491    9900 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 02:20:36.615344    9900 out.go:291] Setting OutFile to fd 1428 ...
	I0229 02:20:36.629285    9900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:20:36.629285    9900 out.go:304] Setting ErrFile to fd 1432...
	I0229 02:20:36.629285    9900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:20:36.643732    9900 mustload.go:65] Loading cluster: multinode-314500
	I0229 02:20:36.645627    9900 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:20:36.646975    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:20:38.646149    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:20:38.646149    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:20:38.646255    9900 host.go:66] Checking if "multinode-314500" exists ...
	I0229 02:20:38.646491    9900 api_server.go:166] Checking apiserver status ...
	I0229 02:20:38.657893    9900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:20:38.657893    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:20:40.648183    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:20:40.648183    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:20:40.648387    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:20:43.025719    9900 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:20:43.026361    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:20:43.026777    9900 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:20:43.138149    9900 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.4800074s)
	I0229 02:20:43.147721    9900 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2018/cgroup
	W0229 02:20:43.170026    9900 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2018/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:20:43.182245    9900 ssh_runner.go:195] Run: ls
	I0229 02:20:43.194320    9900 api_server.go:253] Checking apiserver healthz at https://172.19.2.165:8443/healthz ...
	I0229 02:20:43.203814    9900 api_server.go:279] https://172.19.2.165:8443/healthz returned 200:
	ok
	I0229 02:20:43.205026    9900 out.go:177] * Adding node m03 to cluster multinode-314500
	I0229 02:20:43.206492    9900 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:20:43.206671    9900 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:20:43.209691    9900 out.go:177] * Starting worker node multinode-314500-m03 in cluster multinode-314500
	I0229 02:20:43.210443    9900 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:20:43.210621    9900 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 02:20:43.210621    9900 cache.go:56] Caching tarball of preloaded images
	I0229 02:20:43.210710    9900 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:20:43.210710    9900 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 02:20:43.211556    9900 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:20:43.223812    9900 start.go:365] acquiring machines lock for multinode-314500-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:20:43.224121    9900 start.go:369] acquired machines lock for "multinode-314500-m03" in 309.8µs
	I0229 02:20:43.224305    9900 start.go:93] Provisioning new machine with config: &{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.165 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.5.202 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ing
ress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:d
ocker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0229 02:20:43.224305    9900 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0229 02:20:43.225594    9900 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 02:20:43.225971    9900 start.go:159] libmachine.API.Create for "multinode-314500" (driver="hyperv")
	I0229 02:20:43.225971    9900 client.go:168] LocalClient.Create starting
	I0229 02:20:43.226569    9900 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0229 02:20:43.227214    9900 main.go:141] libmachine: Decoding PEM data...
	I0229 02:20:43.227214    9900 main.go:141] libmachine: Parsing certificate...
	I0229 02:20:43.227352    9900 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0229 02:20:43.227352    9900 main.go:141] libmachine: Decoding PEM data...
	I0229 02:20:43.227352    9900 main.go:141] libmachine: Parsing certificate...
	I0229 02:20:43.227937    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0229 02:20:45.056311    9900 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0229 02:20:45.056311    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:20:45.056660    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0229 02:20:46.755370    9900 main.go:141] libmachine: [stdout =====>] : False
	
	I0229 02:20:46.755370    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:20:46.755827    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 02:20:48.208581    9900 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 02:20:48.209148    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:20:48.209359    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 02:20:51.773494    9900 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 02:20:51.773494    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:20:51.775254    9900 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 02:20:52.139665    9900 main.go:141] libmachine: Creating SSH key...
	I0229 02:20:52.568114    9900 main.go:141] libmachine: Creating VM...
	I0229 02:20:52.568114    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 02:20:55.314594    9900 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 02:20:55.314594    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:20:55.314707    9900 main.go:141] libmachine: Using switch "Default Switch"
	I0229 02:20:55.314783    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 02:20:56.994344    9900 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 02:20:56.994344    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:20:56.994423    9900 main.go:141] libmachine: Creating VHD
	I0229 02:20:56.994492    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0229 02:21:00.614821    9900 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m03\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E2450CDB-2542-40A0-9C64-326C328C0962
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0229 02:21:00.614821    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:00.615763    9900 main.go:141] libmachine: Writing magic tar header
	I0229 02:21:00.615763    9900 main.go:141] libmachine: Writing SSH key tar header
	I0229 02:21:00.625159    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0229 02:21:03.644231    9900 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:21:03.644347    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:03.644347    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m03\disk.vhd' -SizeBytes 20000MB
	I0229 02:21:06.045684    9900 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:21:06.046114    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:06.046202    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-314500-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0229 02:21:09.439010    9900 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-314500-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 02:21:09.439010    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:09.439113    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-314500-m03 -DynamicMemoryEnabled $false
	I0229 02:21:11.565237    9900 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:21:11.566240    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:11.566299    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-314500-m03 -Count 2
	I0229 02:21:13.612778    9900 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:21:13.612874    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:13.612874    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-314500-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m03\boot2docker.iso'
	I0229 02:21:16.023616    9900 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:21:16.023616    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:16.024391    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-314500-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m03\disk.vhd'
	I0229 02:21:18.537708    9900 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:21:18.537708    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:18.537708    9900 main.go:141] libmachine: Starting VM...
	I0229 02:21:18.537708    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-314500-m03
	I0229 02:21:21.233748    9900 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:21:21.234370    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:21.234415    9900 main.go:141] libmachine: Waiting for host to start...
	I0229 02:21:21.234465    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:21:23.386605    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:21:23.386605    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:23.387393    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:21:25.754550    9900 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:21:25.755374    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:26.770096    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:21:28.860219    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:21:28.861084    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:28.861084    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:21:31.255167    9900 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:21:31.256247    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:32.261797    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:21:34.351150    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:21:34.351150    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:34.351624    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:21:36.734159    9900 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:21:36.734205    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:37.736144    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:21:39.789387    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:21:39.789387    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:39.790159    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:21:42.111437    9900 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:21:42.112379    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:43.123361    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:21:45.163181    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:21:45.163331    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:45.163525    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:21:47.610620    9900 main.go:141] libmachine: [stdout =====>] : 172.19.12.66
	
	I0229 02:21:47.611102    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:47.611102    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:21:49.630876    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:21:49.631577    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:49.631655    9900 machine.go:88] provisioning docker machine ...
	I0229 02:21:49.631655    9900 buildroot.go:166] provisioning hostname "multinode-314500-m03"
	I0229 02:21:49.631730    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:21:51.671246    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:21:51.671246    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:51.671346    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:21:54.101431    9900 main.go:141] libmachine: [stdout =====>] : 172.19.12.66
	
	I0229 02:21:54.101431    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:54.107796    9900 main.go:141] libmachine: Using SSH client type: native
	I0229 02:21:54.119243    9900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.12.66 22 <nil> <nil>}
	I0229 02:21:54.119243    9900 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-314500-m03 && echo "multinode-314500-m03" | sudo tee /etc/hostname
	I0229 02:21:54.282200    9900 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-314500-m03
	
	I0229 02:21:54.282200    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:21:56.302359    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:21:56.302448    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:56.302520    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:21:58.698804    9900 main.go:141] libmachine: [stdout =====>] : 172.19.12.66
	
	I0229 02:21:58.699143    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:21:58.703050    9900 main.go:141] libmachine: Using SSH client type: native
	I0229 02:21:58.703547    9900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.12.66 22 <nil> <nil>}
	I0229 02:21:58.703547    9900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-314500-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-314500-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-314500-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:21:58.861938    9900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:21:58.862046    9900 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 02:21:58.862046    9900 buildroot.go:174] setting up certificates
	I0229 02:21:58.862046    9900 provision.go:83] configureAuth start
	I0229 02:21:58.862173    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:22:00.867178    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:22:00.867178    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:00.867834    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:22:03.309051    9900 main.go:141] libmachine: [stdout =====>] : 172.19.12.66
	
	I0229 02:22:03.309051    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:03.309156    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:22:05.325240    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:22:05.326165    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:05.326307    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:22:07.748482    9900 main.go:141] libmachine: [stdout =====>] : 172.19.12.66
	
	I0229 02:22:07.748603    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:07.748603    9900 provision.go:138] copyHostCerts
	I0229 02:22:07.749052    9900 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 02:22:07.749052    9900 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 02:22:07.749431    9900 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 02:22:07.750722    9900 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 02:22:07.750722    9900 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 02:22:07.750905    9900 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 02:22:07.752123    9900 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 02:22:07.752123    9900 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 02:22:07.752491    9900 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0229 02:22:07.753401    9900 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-314500-m03 san=[172.19.12.66 172.19.12.66 localhost 127.0.0.1 minikube multinode-314500-m03]
	I0229 02:22:07.929009    9900 provision.go:172] copyRemoteCerts
	I0229 02:22:07.938099    9900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:22:07.938213    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:22:09.941961    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:22:09.942135    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:09.942254    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:22:12.331095    9900 main.go:141] libmachine: [stdout =====>] : 172.19.12.66
	
	I0229 02:22:12.331095    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:12.331095    9900 sshutil.go:53] new ssh client: &{IP:172.19.12.66 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m03\id_rsa Username:docker}
	I0229 02:22:12.452330    9900 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5138714s)
	I0229 02:22:12.452774    9900 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 02:22:12.502031    9900 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0229 02:22:12.550200    9900 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:22:12.596999    9900 provision.go:86] duration metric: configureAuth took 13.7340643s
	I0229 02:22:12.596999    9900 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:22:12.597996    9900 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:22:12.597996    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:22:14.587509    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:22:14.587597    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:14.587597    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:22:16.973793    9900 main.go:141] libmachine: [stdout =====>] : 172.19.12.66
	
	I0229 02:22:16.974045    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:16.978031    9900 main.go:141] libmachine: Using SSH client type: native
	I0229 02:22:16.978190    9900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.12.66 22 <nil> <nil>}
	I0229 02:22:16.978190    9900 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 02:22:17.120803    9900 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 02:22:17.120803    9900 buildroot.go:70] root file system type: tmpfs
	I0229 02:22:17.121215    9900 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 02:22:17.121402    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:22:19.159172    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:22:19.159172    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:19.159172    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:22:21.570994    9900 main.go:141] libmachine: [stdout =====>] : 172.19.12.66
	
	I0229 02:22:21.570994    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:21.575392    9900 main.go:141] libmachine: Using SSH client type: native
	I0229 02:22:21.575910    9900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.12.66 22 <nil> <nil>}
	I0229 02:22:21.576000    9900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 02:22:21.741134    9900 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 02:22:21.741701    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:22:23.758290    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:22:23.758517    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:23.758517    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:22:26.162833    9900 main.go:141] libmachine: [stdout =====>] : 172.19.12.66
	
	I0229 02:22:26.163392    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:26.169487    9900 main.go:141] libmachine: Using SSH client type: native
	I0229 02:22:26.170209    9900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.12.66 22 <nil> <nil>}
	I0229 02:22:26.170250    9900 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 02:22:27.207173    9900 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 02:22:27.207173    9900 machine.go:91] provisioned docker machine in 37.5734329s
	I0229 02:22:27.207173    9900 client.go:171] LocalClient.Create took 1m43.975425s
	I0229 02:22:27.207173    9900 start.go:167] duration metric: libmachine.API.Create for "multinode-314500" took 1m43.975425s
	I0229 02:22:27.207173    9900 start.go:300] post-start starting for "multinode-314500-m03" (driver="hyperv")
	I0229 02:22:27.207173    9900 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:22:27.217733    9900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:22:27.217733    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:22:29.241125    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:22:29.241125    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:29.241340    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:22:31.684864    9900 main.go:141] libmachine: [stdout =====>] : 172.19.12.66
	
	I0229 02:22:31.684864    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:31.685933    9900 sshutil.go:53] new ssh client: &{IP:172.19.12.66 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m03\id_rsa Username:docker}
	I0229 02:22:31.804376    9900 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5863879s)
	I0229 02:22:31.816684    9900 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:22:31.824740    9900 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:22:31.824740    9900 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 02:22:31.825362    9900 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 02:22:31.826165    9900 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> 33122.pem in /etc/ssl/certs
	I0229 02:22:31.837317    9900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:22:31.856893    9900 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /etc/ssl/certs/33122.pem (1708 bytes)
	I0229 02:22:31.906692    9900 start.go:303] post-start completed in 4.6991577s
	I0229 02:22:31.908603    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:22:33.917418    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:22:33.917418    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:33.917418    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:22:36.332891    9900 main.go:141] libmachine: [stdout =====>] : 172.19.12.66
	
	I0229 02:22:36.332891    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:36.333699    9900 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:22:36.336849    9900 start.go:128] duration metric: createHost completed in 1m53.1062594s
	I0229 02:22:36.336961    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:22:38.355005    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:22:38.355109    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:38.355490    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:22:40.747944    9900 main.go:141] libmachine: [stdout =====>] : 172.19.12.66
	
	I0229 02:22:40.747944    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:40.752607    9900 main.go:141] libmachine: Using SSH client type: native
	I0229 02:22:40.752667    9900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.12.66 22 <nil> <nil>}
	I0229 02:22:40.752667    9900 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:22:40.891193    9900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173361.060494858
	
	I0229 02:22:40.891299    9900 fix.go:206] guest clock: 1709173361.060494858
	I0229 02:22:40.891299    9900 fix.go:219] Guest: 2024-02-29 02:22:41.060494858 +0000 UTC Remote: 2024-02-29 02:22:36.3369615 +0000 UTC m=+119.869386701 (delta=4.723533358s)
	I0229 02:22:40.891410    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:22:42.857158    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:22:42.857158    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:42.857391    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:22:45.237659    9900 main.go:141] libmachine: [stdout =====>] : 172.19.12.66
	
	I0229 02:22:45.237659    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:45.241850    9900 main.go:141] libmachine: Using SSH client type: native
	I0229 02:22:45.241850    9900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.12.66 22 <nil> <nil>}
	I0229 02:22:45.241850    9900 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709173360
	I0229 02:22:45.392698    9900 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 02:22:40 UTC 2024
	
	I0229 02:22:45.392698    9900 fix.go:226] clock set: Thu Feb 29 02:22:40 UTC 2024
	 (err=<nil>)
	I0229 02:22:45.392698    9900 start.go:83] releasing machines lock for "multinode-314500-m03", held for 2m2.1617901s
	I0229 02:22:45.393033    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:22:47.394774    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:22:47.394774    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:47.395458    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:22:49.805987    9900 main.go:141] libmachine: [stdout =====>] : 172.19.12.66
	
	I0229 02:22:49.805987    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:49.808957    9900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:22:49.808957    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:22:49.819055    9900 ssh_runner.go:195] Run: systemctl --version
	I0229 02:22:49.819055    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:22:51.842460    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:22:51.842683    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:51.842760    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:22:51.860218    9900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:22:51.860218    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:51.860218    9900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:22:54.293581    9900 main.go:141] libmachine: [stdout =====>] : 172.19.12.66
	
	I0229 02:22:54.293581    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:54.293581    9900 sshutil.go:53] new ssh client: &{IP:172.19.12.66 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m03\id_rsa Username:docker}
	I0229 02:22:54.320384    9900 main.go:141] libmachine: [stdout =====>] : 172.19.12.66
	
	I0229 02:22:54.321383    9900 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:22:54.321383    9900 sshutil.go:53] new ssh client: &{IP:172.19.12.66 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m03\id_rsa Username:docker}
	I0229 02:22:54.457802    9900 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6485074s)
	I0229 02:22:54.457802    9900 ssh_runner.go:235] Completed: systemctl --version: (4.6384098s)
	I0229 02:22:54.466125    9900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:22:54.475493    9900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:22:54.486563    9900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:22:54.523039    9900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:22:54.523039    9900 start.go:475] detecting cgroup driver to use...
	I0229 02:22:54.523582    9900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:22:54.574694    9900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 02:22:54.602678    9900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:22:54.622710    9900 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:22:54.631681    9900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:22:54.662462    9900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:22:54.693662    9900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:22:54.728731    9900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:22:54.756733    9900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:22:54.789925    9900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 02:22:54.818601    9900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:22:54.846616    9900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:22:54.874619    9900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:22:55.056250    9900 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 02:22:55.092726    9900 start.go:475] detecting cgroup driver to use...
	I0229 02:22:55.105728    9900 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 02:22:55.146614    9900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:22:55.175245    9900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:22:55.217181    9900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:22:55.251985    9900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:22:55.284959    9900 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:22:55.340843    9900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:22:55.365973    9900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:22:55.410316    9900 ssh_runner.go:195] Run: which cri-dockerd
	I0229 02:22:55.426025    9900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 02:22:55.444419    9900 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 02:22:55.485034    9900 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 02:22:55.685879    9900 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 02:22:55.871362    9900 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 02:22:55.871689    9900 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 02:22:55.920254    9900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:22:56.105097    9900 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 02:23:57.217276    9900 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1087165s)
	I0229 02:23:57.226171    9900 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0229 02:23:57.257141    9900 out.go:177] 
	W0229 02:23:57.258115    9900 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 29 02:22:26 multinode-314500-m03 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 02:22:26 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:26.950050770Z" level=info msg="Starting up"
	Feb 29 02:22:26 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:26.951199449Z" level=info msg="containerd not running, starting managed containerd"
	Feb 29 02:22:26 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:26.954364818Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=650
	Feb 29 02:22:26 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:26.986557843Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017101459Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017211184Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017283600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017299804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017397227Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017429934Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017657587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017756509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017778314Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017790117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017887840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.018262326Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.021669410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.021775134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.021978981Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.022011389Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.022215336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.022365270Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.022453091Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.031840552Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.031971582Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.031997388Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032015492Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032032296Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032151423Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032498903Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032631434Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032738859Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032763764Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032780568Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032797072Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032820777Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032838682Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032854985Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032869989Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032884392Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032902196Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032997718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033019123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033037127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033052431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033066234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033080637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033093740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033107643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033122747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033138350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033151153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033166657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033180460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033197364Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033220469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033234173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033247776Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033288185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033320592Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033333996Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033345998Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033435819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033547345Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033568049Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033962040Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.034121277Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.034187892Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.034288815Z" level=info msg="containerd successfully booted in 0.048692s"
	Feb 29 02:22:27 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:27.065565716Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 02:22:27 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:27.078170718Z" level=info msg="Loading containers: start."
	Feb 29 02:22:27 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:27.302452742Z" level=info msg="Loading containers: done."
	Feb 29 02:22:27 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:27.317154553Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 02:22:27 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:27.317271980Z" level=info msg="Daemon has completed initialization"
	Feb 29 02:22:27 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:27.374027345Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 02:22:27 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:27.374291006Z" level=info msg="API listen on [::]:2376"
	Feb 29 02:22:27 multinode-314500-m03 systemd[1]: Started Docker Application Container Engine.
	Feb 29 02:22:56 multinode-314500-m03 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 02:22:56 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:56.304481815Z" level=info msg="Processing signal 'terminated'"
	Feb 29 02:22:56 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:56.305749765Z" level=info msg="Daemon shutdown complete"
	Feb 29 02:22:56 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:56.306549196Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 02:22:56 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:56.306660801Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 02:22:56 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:56.307294526Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Feb 29 02:22:57 multinode-314500-m03 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 02:22:57 multinode-314500-m03 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 02:22:57 multinode-314500-m03 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 02:22:57 multinode-314500-m03 dockerd[982]: time="2024-02-29T02:22:57.380445758Z" level=info msg="Starting up"
	Feb 29 02:23:57 multinode-314500-m03 dockerd[982]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 02:23:57 multinode-314500-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 02:23:57 multinode-314500-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 02:23:57 multinode-314500-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 29 02:22:26 multinode-314500-m03 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 02:22:26 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:26.950050770Z" level=info msg="Starting up"
	Feb 29 02:22:26 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:26.951199449Z" level=info msg="containerd not running, starting managed containerd"
	Feb 29 02:22:26 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:26.954364818Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=650
	Feb 29 02:22:26 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:26.986557843Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017101459Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017211184Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017283600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017299804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017397227Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017429934Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017657587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017756509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017778314Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017790117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.017887840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.018262326Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.021669410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.021775134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.021978981Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.022011389Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.022215336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.022365270Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.022453091Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.031840552Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.031971582Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.031997388Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032015492Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032032296Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032151423Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032498903Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032631434Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032738859Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032763764Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032780568Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032797072Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032820777Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032838682Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032854985Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032869989Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032884392Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032902196Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.032997718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033019123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033037127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033052431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033066234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033080637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033093740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033107643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033122747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033138350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033151153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033166657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033180460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033197364Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033220469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033234173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033247776Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033288185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033320592Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033333996Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033345998Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033435819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033547345Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033568049Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.033962040Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.034121277Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.034187892Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 02:22:27 multinode-314500-m03 dockerd[650]: time="2024-02-29T02:22:27.034288815Z" level=info msg="containerd successfully booted in 0.048692s"
	Feb 29 02:22:27 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:27.065565716Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 02:22:27 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:27.078170718Z" level=info msg="Loading containers: start."
	Feb 29 02:22:27 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:27.302452742Z" level=info msg="Loading containers: done."
	Feb 29 02:22:27 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:27.317154553Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 02:22:27 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:27.317271980Z" level=info msg="Daemon has completed initialization"
	Feb 29 02:22:27 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:27.374027345Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 02:22:27 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:27.374291006Z" level=info msg="API listen on [::]:2376"
	Feb 29 02:22:27 multinode-314500-m03 systemd[1]: Started Docker Application Container Engine.
	Feb 29 02:22:56 multinode-314500-m03 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 02:22:56 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:56.304481815Z" level=info msg="Processing signal 'terminated'"
	Feb 29 02:22:56 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:56.305749765Z" level=info msg="Daemon shutdown complete"
	Feb 29 02:22:56 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:56.306549196Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 02:22:56 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:56.306660801Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 02:22:56 multinode-314500-m03 dockerd[642]: time="2024-02-29T02:22:56.307294526Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Feb 29 02:22:57 multinode-314500-m03 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 02:22:57 multinode-314500-m03 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 02:22:57 multinode-314500-m03 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 02:22:57 multinode-314500-m03 dockerd[982]: time="2024-02-29T02:22:57.380445758Z" level=info msg="Starting up"
	Feb 29 02:23:57 multinode-314500-m03 dockerd[982]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 02:23:57 multinode-314500-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 02:23:57 multinode-314500-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 02:23:57 multinode-314500-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0229 02:23:57.258644    9900 out.go:239] * 
	W0229 02:23:57.265659    9900 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_node_109417fbff9a3b9650da7ef19b4c6539dd55bbf9_0.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 02:23:57.266382    9900 out.go:177] 

** /stderr **
multinode_test.go:113: failed to add node to current cluster. args "out/minikube-windows-amd64.exe node add -p multinode-314500 -v 3 --alsologtostderr" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-314500 -n multinode-314500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-314500 -n multinode-314500: (11.4630515s)
helpers_test.go:244: <<< TestMultiNode/serial/AddNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/AddNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-314500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-314500 logs -n 25: (7.8200351s)
helpers_test.go:252: TestMultiNode/serial/AddNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p mount-start-1-141600                           | mount-start-1-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:09 UTC | 29 Feb 24 02:10 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-141600 ssh -- ls                    | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:10 UTC | 29 Feb 24 02:10 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-141600                           | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:10 UTC | 29 Feb 24 02:10 UTC |
	| start   | -p mount-start-2-141600                           | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:10 UTC | 29 Feb 24 02:12 UTC |
	| mount   | C:\Users\jenkins.minikube5:/minikube-host         | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:12 UTC |                     |
	|         | --profile mount-start-2-141600 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-141600 ssh -- ls                    | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:12 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-141600                           | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:12 UTC |
	| delete  | -p mount-start-1-141600                           | mount-start-1-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:12 UTC |
	| start   | -p multinode-314500                               | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:13 UTC | 29 Feb 24 02:19 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- apply -f                   | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- rollout                    | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- get pods -o                | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- get pods -o                | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2 -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- get pods -o                | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC |                     |
	|         | busybox-5b5d89c9d6-826w2 -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.0.1                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC |                     |
	|         | busybox-5b5d89c9d6-qcblm -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.0.1                           |                      |                   |         |                     |                     |
	| node    | add -p multinode-314500 -v 3                      | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:20 UTC |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:13:00
	Running on machine: minikube5
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:13:00.149906    8584 out.go:291] Setting OutFile to fd 1312 ...
	I0229 02:13:00.150227    8584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:13:00.150227    8584 out.go:304] Setting ErrFile to fd 1328...
	I0229 02:13:00.150227    8584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:13:00.171700    8584 out.go:298] Setting JSON to false
	I0229 02:13:00.175741    8584 start.go:129] hostinfo: {"hostname":"minikube5","uptime":269007,"bootTime":1708903773,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 02:13:00.175741    8584 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 02:13:00.177046    8584 out.go:177] * [multinode-314500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 02:13:00.177046    8584 notify.go:220] Checking for updates...
	I0229 02:13:00.178097    8584 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:13:00.178485    8584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:13:00.178485    8584 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 02:13:00.179850    8584 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:13:00.180273    8584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:13:00.181791    8584 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:13:05.205228    8584 out.go:177] * Using the hyperv driver based on user configuration
	I0229 02:13:05.206271    8584 start.go:299] selected driver: hyperv
	I0229 02:13:05.206271    8584 start.go:903] validating driver "hyperv" against <nil>
	I0229 02:13:05.206359    8584 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:13:05.251841    8584 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 02:13:05.252685    8584 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:13:05.252685    8584 cni.go:84] Creating CNI manager for ""
	I0229 02:13:05.252685    8584 cni.go:136] 0 nodes found, recommending kindnet
	I0229 02:13:05.252685    8584 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0229 02:13:05.252685    8584 start_flags.go:323] config:
	{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:13:05.253940    8584 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:13:05.255538    8584 out.go:177] * Starting control plane node multinode-314500 in cluster multinode-314500
	I0229 02:13:05.256114    8584 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:13:05.256302    8584 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 02:13:05.256344    8584 cache.go:56] Caching tarball of preloaded images
	I0229 02:13:05.256572    8584 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:13:05.256572    8584 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 02:13:05.257361    8584 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:13:05.257455    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json: {Name:mkd3169e69638735699adbb2ff8489bce372cb2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:13:05.258503    8584 start.go:365] acquiring machines lock for multinode-314500: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:13:05.258691    8584 start.go:369] acquired machines lock for "multinode-314500" in 152µs
	I0229 02:13:05.258871    8584 start.go:93] Provisioning new machine with config: &{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 02:13:05.258976    8584 start.go:125] createHost starting for "" (driver="hyperv")
	I0229 02:13:05.259751    8584 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 02:13:05.259891    8584 start.go:159] libmachine.API.Create for "multinode-314500" (driver="hyperv")
	I0229 02:13:05.259891    8584 client.go:168] LocalClient.Create starting
	I0229 02:13:05.260497    8584 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0229 02:13:05.260497    8584 main.go:141] libmachine: Decoding PEM data...
	I0229 02:13:05.260497    8584 main.go:141] libmachine: Parsing certificate...
	I0229 02:13:05.260497    8584 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0229 02:13:05.261186    8584 main.go:141] libmachine: Decoding PEM data...
	I0229 02:13:05.261186    8584 main.go:141] libmachine: Parsing certificate...
	I0229 02:13:05.261186    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0229 02:13:07.286347    8584 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0229 02:13:07.286422    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:07.286509    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0229 02:13:08.976234    8584 main.go:141] libmachine: [stdout =====>] : False
	
	I0229 02:13:08.976234    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:08.976234    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 02:13:10.405564    8584 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 02:13:10.405718    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:10.405718    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 02:13:13.896897    8584 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 02:13:13.896976    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:13.899798    8584 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 02:13:14.290871    8584 main.go:141] libmachine: Creating SSH key...
	I0229 02:13:14.527065    8584 main.go:141] libmachine: Creating VM...
	I0229 02:13:14.527065    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 02:13:17.265891    8584 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 02:13:17.266097    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:17.266097    8584 main.go:141] libmachine: Using switch "Default Switch"
	I0229 02:13:17.266238    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 02:13:18.963078    8584 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 02:13:18.963078    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:18.963078    8584 main.go:141] libmachine: Creating VHD
	I0229 02:13:18.964222    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0229 02:13:22.594784    8584 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 884B5862-3469-4CFD-B182-8E081E737039
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0229 02:13:22.594784    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:22.594784    8584 main.go:141] libmachine: Writing magic tar header
	I0229 02:13:22.594784    8584 main.go:141] libmachine: Writing SSH key tar header
	I0229 02:13:22.604709    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0229 02:13:25.650762    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:25.650762    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:25.650762    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\disk.vhd' -SizeBytes 20000MB
	I0229 02:13:28.088594    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:28.088773    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:28.088918    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-314500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0229 02:13:31.464130    8584 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-314500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 02:13:31.464130    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:31.464846    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-314500 -DynamicMemoryEnabled $false
	I0229 02:13:33.602734    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:33.602734    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:33.602734    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-314500 -Count 2
	I0229 02:13:35.681481    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:35.682414    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:35.682502    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-314500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\boot2docker.iso'
	I0229 02:13:38.162637    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:38.162637    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:38.163401    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-314500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\disk.vhd'
	I0229 02:13:40.645938    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:40.646015    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:40.646015    8584 main.go:141] libmachine: Starting VM...
	I0229 02:13:40.646015    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-314500
	I0229 02:13:43.355580    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:43.355580    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:43.355580    8584 main.go:141] libmachine: Waiting for host to start...
	I0229 02:13:43.355580    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:13:45.477300    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:13:45.477397    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:45.477397    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:13:47.817639    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:47.817639    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:48.829666    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:13:50.912195    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:13:50.912241    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:50.912370    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:13:53.314227    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:53.314300    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:54.326584    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:13:56.402395    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:13:56.403080    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:56.403237    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:13:58.748206    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:58.748429    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:59.750928    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:01.825704    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:01.825704    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:01.826435    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:04.171500    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:14:04.171557    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:05.181274    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:07.245329    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:07.245623    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:07.245781    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:09.720669    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:09.720669    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:09.721021    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:11.754505    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:11.755426    8584 main.go:141] libmachine: [stderr =====>] : 
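The repeated query rounds above (state, then IP, with empty stdout until 02:14:09) are a plain poll-until-ready loop: libmachine keeps re-running `(( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]` with a short sleep between rounds until the guest reports an address. A minimal runnable sketch of that pattern — helper names are hypothetical, not minikube's actual code:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls a query function until it yields a non-empty IP,
// sleeping between rounds, as the log's repeated Get-VM queries do.
// (Hypothetical helper for illustration only.)
func waitForIP(query func() (string, error), interval time.Duration, attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		ip, err := query()
		if err != nil {
			return "", err
		}
		if ip != "" {
			return ip, nil
		}
		time.Sleep(interval) // the log shows ~1s pauses between rounds
	}
	return "", errors.New("timed out waiting for VM IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 5 { // early rounds come back empty, as in the log
			return "", nil
		}
		return "172.19.2.165", nil
	}, time.Millisecond, 10)
	fmt.Println(ip, err) // → 172.19.2.165 <nil>
}
```

Bounding the attempts matters: without a cap, a VM that never acquires a DHCP lease would hang the create step forever instead of failing with a diagnosable error.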
	I0229 02:14:11.755426    8584 machine.go:88] provisioning docker machine ...
	I0229 02:14:11.755516    8584 buildroot.go:166] provisioning hostname "multinode-314500"
	I0229 02:14:11.755562    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:13.804208    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:13.804208    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:13.804335    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:16.247231    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:16.248239    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:16.254331    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:14:16.267585    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:14:16.267585    8584 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-314500 && echo "multinode-314500" | sudo tee /etc/hostname
	I0229 02:14:16.424392    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-314500
	
	I0229 02:14:16.424516    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:18.448299    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:18.448299    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:18.448830    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:20.858056    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:20.858056    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:20.863979    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:14:20.864174    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:14:20.864174    8584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-314500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-314500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-314500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:14:21.010675    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:14:21.010763    8584 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 02:14:21.010763    8584 buildroot.go:174] setting up certificates
	I0229 02:14:21.010852    8584 provision.go:83] configureAuth start
	I0229 02:14:21.011112    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:22.998181    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:22.998447    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:22.998552    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:25.432573    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:25.432573    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:25.433124    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:27.425883    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:27.426494    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:27.426494    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:29.833478    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:29.833478    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:29.833478    8584 provision.go:138] copyHostCerts
	I0229 02:14:29.834264    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 02:14:29.834264    8584 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 02:14:29.834264    8584 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 02:14:29.834791    8584 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 02:14:29.835948    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 02:14:29.836088    8584 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 02:14:29.836088    8584 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 02:14:29.836088    8584 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 02:14:29.837182    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 02:14:29.837305    8584 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 02:14:29.837396    8584 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 02:14:29.837627    8584 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0229 02:14:29.838481    8584 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-314500 san=[172.19.2.165 172.19.2.165 localhost 127.0.0.1 minikube multinode-314500]
	I0229 02:14:29.990342    8584 provision.go:172] copyRemoteCerts
	I0229 02:14:29.998349    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:14:29.999347    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:32.015676    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:32.015676    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:32.016407    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:34.434860    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:34.435751    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:34.435751    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:14:34.540272    8584 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5416689s)
	I0229 02:14:34.540378    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 02:14:34.540655    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 02:14:34.589037    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 02:14:34.589037    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0229 02:14:34.637988    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 02:14:34.638288    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 02:14:34.684997    8584 provision.go:86] duration metric: configureAuth took 13.6732738s
	I0229 02:14:34.684997    8584 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:14:34.685957    8584 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:14:34.685957    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:36.732569    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:36.732569    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:36.732893    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:39.171929    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:39.171986    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:39.176641    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:14:39.177166    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:14:39.177237    8584 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 02:14:39.296794    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 02:14:39.296888    8584 buildroot.go:70] root file system type: tmpfs
	I0229 02:14:39.296957    8584 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 02:14:39.296957    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:41.315910    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:41.315910    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:41.315910    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:43.719853    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:43.720852    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:43.725258    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:14:43.725666    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:14:43.725666    8584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 02:14:43.881883    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 02:14:43.882199    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:45.916519    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:45.916519    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:45.917559    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:48.351202    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:48.351586    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:48.356595    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:14:48.356668    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:14:48.356668    8584 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 02:14:49.392262    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 02:14:49.392262    8584 machine.go:91] provisioned docker machine in 37.6347323s
	I0229 02:14:49.392262    8584 client.go:171] LocalClient.Create took 1m44.1265457s
	I0229 02:14:49.392262    8584 start.go:167] duration metric: libmachine.API.Create for "multinode-314500" took 1m44.1265457s
	I0229 02:14:49.392262    8584 start.go:300] post-start starting for "multinode-314500" (driver="hyperv")
	I0229 02:14:49.393258    8584 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:14:49.402259    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:14:49.402259    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:51.395389    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:51.395616    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:51.395690    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:53.788270    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:53.788752    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:53.789362    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:14:53.893141    8584 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.490524s)
	I0229 02:14:53.905375    8584 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:14:53.912851    8584 command_runner.go:130] > NAME=Buildroot
	I0229 02:14:53.912851    8584 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 02:14:53.912851    8584 command_runner.go:130] > ID=buildroot
	I0229 02:14:53.912851    8584 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 02:14:53.912851    8584 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 02:14:53.912851    8584 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:14:53.912851    8584 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 02:14:53.913631    8584 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 02:14:53.914277    8584 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> 33122.pem in /etc/ssl/certs
	I0229 02:14:53.914277    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /etc/ssl/certs/33122.pem
	I0229 02:14:53.923918    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:14:53.943567    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /etc/ssl/certs/33122.pem (1708 bytes)
	I0229 02:14:53.989666    8584 start.go:303] post-start completed in 4.5952349s
	I0229 02:14:53.991784    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:55.999148    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:55.999350    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:55.999350    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:58.385355    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:58.385355    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:58.385948    8584 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:14:58.389663    8584 start.go:128] duration metric: createHost completed in 1m53.1242572s
	I0229 02:14:58.389764    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:15:00.365905    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:15:00.365905    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:00.365905    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:15:02.777961    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:15:02.777961    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:02.782646    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:15:02.783280    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:15:02.783280    8584 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 02:15:02.899664    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709172903.069532857
	
	I0229 02:15:02.899664    8584 fix.go:206] guest clock: 1709172903.069532857
	I0229 02:15:02.899664    8584 fix.go:219] Guest: 2024-02-29 02:15:03.069532857 +0000 UTC Remote: 2024-02-29 02:14:58.3896639 +0000 UTC m=+118.373915301 (delta=4.679868957s)
	I0229 02:15:02.899873    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:15:04.946764    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:15:04.946764    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:04.946764    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:15:07.386956    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:15:07.386956    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:07.391193    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:15:07.391193    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:15:07.391193    8584 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709172902
	I0229 02:15:07.538124    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 02:15:02 UTC 2024
	
	I0229 02:15:07.538124    8584 fix.go:226] clock set: Thu Feb 29 02:15:02 UTC 2024
	 (err=<nil>)
	I0229 02:15:07.538124    8584 start.go:83] releasing machines lock for "multinode-314500", held for 2m2.2725929s
	I0229 02:15:07.538124    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:15:09.578277    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:15:09.578277    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:09.578477    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:15:12.017474    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:15:12.017474    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:12.020803    8584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:15:12.020938    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:15:12.028085    8584 ssh_runner.go:195] Run: cat /version.json
	I0229 02:15:12.028085    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:15:14.106976    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:15:14.106976    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:14.107962    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:15:14.108048    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:15:14.108166    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:14.108210    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:15:16.599162    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:15:16.599162    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:16.599717    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:15:16.624118    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:15:16.624199    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:16.624505    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:15:16.878087    8584 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 02:15:16.878258    8584 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0229 02:15:16.878258    8584 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8570973s)
	I0229 02:15:16.878258    8584 ssh_runner.go:235] Completed: cat /version.json: (4.8499018s)
	I0229 02:15:16.891953    8584 ssh_runner.go:195] Run: systemctl --version
	I0229 02:15:16.901191    8584 command_runner.go:130] > systemd 252 (252)
	I0229 02:15:16.901288    8584 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0229 02:15:16.911194    8584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 02:15:16.920182    8584 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0229 02:15:16.920182    8584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:15:16.929614    8584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:15:16.958720    8584 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0229 02:15:16.958791    8584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:15:16.958791    8584 start.go:475] detecting cgroup driver to use...
	I0229 02:15:16.958791    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:15:16.993577    8584 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0229 02:15:17.006166    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 02:15:17.036528    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:15:17.056400    8584 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:15:17.066084    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:15:17.094368    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:15:17.125650    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:15:17.155407    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:15:17.184091    8584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:15:17.211981    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 02:15:17.240589    8584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:15:17.258992    8584 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 02:15:17.271051    8584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:15:17.301079    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:15:17.510984    8584 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 02:15:17.540848    8584 start.go:475] detecting cgroup driver to use...
	I0229 02:15:17.549602    8584 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 02:15:17.574482    8584 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0229 02:15:17.574482    8584 command_runner.go:130] > [Unit]
	I0229 02:15:17.574482    8584 command_runner.go:130] > Description=Docker Application Container Engine
	I0229 02:15:17.574482    8584 command_runner.go:130] > Documentation=https://docs.docker.com
	I0229 02:15:17.574482    8584 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0229 02:15:17.574482    8584 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0229 02:15:17.574482    8584 command_runner.go:130] > StartLimitBurst=3
	I0229 02:15:17.574482    8584 command_runner.go:130] > StartLimitIntervalSec=60
	I0229 02:15:17.574482    8584 command_runner.go:130] > [Service]
	I0229 02:15:17.574482    8584 command_runner.go:130] > Type=notify
	I0229 02:15:17.574482    8584 command_runner.go:130] > Restart=on-failure
	I0229 02:15:17.574482    8584 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0229 02:15:17.574482    8584 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0229 02:15:17.574482    8584 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0229 02:15:17.574482    8584 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0229 02:15:17.574482    8584 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0229 02:15:17.574482    8584 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0229 02:15:17.574482    8584 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0229 02:15:17.574482    8584 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0229 02:15:17.574482    8584 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0229 02:15:17.574482    8584 command_runner.go:130] > ExecStart=
	I0229 02:15:17.574482    8584 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0229 02:15:17.574482    8584 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0229 02:15:17.574482    8584 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0229 02:15:17.574482    8584 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0229 02:15:17.574482    8584 command_runner.go:130] > LimitNOFILE=infinity
	I0229 02:15:17.574482    8584 command_runner.go:130] > LimitNPROC=infinity
	I0229 02:15:17.574482    8584 command_runner.go:130] > LimitCORE=infinity
	I0229 02:15:17.574482    8584 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0229 02:15:17.574482    8584 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0229 02:15:17.574482    8584 command_runner.go:130] > TasksMax=infinity
	I0229 02:15:17.574482    8584 command_runner.go:130] > TimeoutStartSec=0
	I0229 02:15:17.574482    8584 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0229 02:15:17.574482    8584 command_runner.go:130] > Delegate=yes
	I0229 02:15:17.574482    8584 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0229 02:15:17.574482    8584 command_runner.go:130] > KillMode=process
	I0229 02:15:17.574482    8584 command_runner.go:130] > [Install]
	I0229 02:15:17.574482    8584 command_runner.go:130] > WantedBy=multi-user.target
	I0229 02:15:17.584629    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:15:17.616355    8584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:15:17.657950    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:15:17.693651    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:15:17.729096    8584 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:15:17.784099    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:15:17.808125    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:15:17.842233    8584 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0229 02:15:17.851465    8584 ssh_runner.go:195] Run: which cri-dockerd
	I0229 02:15:17.862101    8584 command_runner.go:130] > /usr/bin/cri-dockerd
	I0229 02:15:17.871161    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 02:15:17.889692    8584 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 02:15:17.933551    8584 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 02:15:18.134287    8584 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 02:15:18.310331    8584 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 02:15:18.310331    8584 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 02:15:18.357955    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:15:18.552365    8584 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 02:15:20.070091    8584 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5176409s)
	I0229 02:15:20.081202    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 02:15:20.122115    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:15:20.159070    8584 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 02:15:20.360745    8584 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 02:15:20.562103    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:15:20.747807    8584 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 02:15:20.790021    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:15:20.823798    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:15:21.024568    8584 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 02:15:21.124460    8584 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 02:15:21.138536    8584 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 02:15:21.147715    8584 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0229 02:15:21.147715    8584 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 02:15:21.147715    8584 command_runner.go:130] > Device: 0,22	Inode: 889         Links: 1
	I0229 02:15:21.147715    8584 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0229 02:15:21.147715    8584 command_runner.go:130] > Access: 2024-02-29 02:15:21.219763442 +0000
	I0229 02:15:21.147715    8584 command_runner.go:130] > Modify: 2024-02-29 02:15:21.219763442 +0000
	I0229 02:15:21.147715    8584 command_runner.go:130] > Change: 2024-02-29 02:15:21.223763631 +0000
	I0229 02:15:21.147715    8584 command_runner.go:130] >  Birth: -
	I0229 02:15:21.147715    8584 start.go:543] Will wait 60s for crictl version
	I0229 02:15:21.160607    8584 ssh_runner.go:195] Run: which crictl
	I0229 02:15:21.166613    8584 command_runner.go:130] > /usr/bin/crictl
	I0229 02:15:21.175685    8584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:15:21.243995    8584 command_runner.go:130] > Version:  0.1.0
	I0229 02:15:21.244098    8584 command_runner.go:130] > RuntimeName:  docker
	I0229 02:15:21.244098    8584 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0229 02:15:21.244098    8584 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 02:15:21.244098    8584 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 02:15:21.252876    8584 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:15:21.284945    8584 command_runner.go:130] > 24.0.7
	I0229 02:15:21.293857    8584 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:15:21.328569    8584 command_runner.go:130] > 24.0.7
	I0229 02:15:21.329772    8584 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 02:15:21.329981    8584 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 02:15:21.335723    8584 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 02:15:21.335723    8584 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 02:15:21.335723    8584 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 02:15:21.335830    8584 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:a3:c1 Flags:up|broadcast|multicast|running}
	I0229 02:15:21.339030    8584 ip.go:210] interface addr: fe80::fc78:4865:5cac:d448/64
	I0229 02:15:21.339030    8584 ip.go:210] interface addr: 172.19.0.1/20
	I0229 02:15:21.346674    8584 ssh_runner.go:195] Run: grep 172.19.0.1	host.minikube.internal$ /etc/hosts
	I0229 02:15:21.352657    8584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
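Annotation: the bash one-liner above keeps `/etc/hosts` idempotent — strip any stale `host.minikube.internal` line, then append the current mapping. The same logic as a pure-string Go sketch (the helper is illustrative; minikube runs this remotely via grep/echo over SSH):

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<name>" (as the
// log's `grep -v $'\t<name>$'` does), then appends the fresh mapping.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry: replaced below
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n172.19.0.2\thost.minikube.internal\n"
	fmt.Print(ensureHostsEntry(hosts, "172.19.0.1", "host.minikube.internal"))
}
```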
	I0229 02:15:21.374301    8584 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:15:21.380708    8584 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 02:15:21.407908    8584 docker.go:685] Got preloaded images: 
	I0229 02:15:21.407908    8584 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0229 02:15:21.417190    8584 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 02:15:21.434433    8584 command_runner.go:139] > {"Repositories":{}}
	I0229 02:15:21.444446    8584 ssh_runner.go:195] Run: which lz4
	I0229 02:15:21.452611    8584 command_runner.go:130] > /usr/bin/lz4
	I0229 02:15:21.453860    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0229 02:15:21.463263    8584 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:15:21.469865    8584 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:15:21.470175    8584 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:15:21.470424    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0229 02:15:23.210150    8584 docker.go:649] Took 1.755758 seconds to copy over tarball
	I0229 02:15:23.222182    8584 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:15:33.289701    8584 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (10.0669568s)
	I0229 02:15:33.289701    8584 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 02:15:33.357787    8584 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 02:15:33.376545    8584 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8
bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021
a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0229 02:15:33.376717    8584 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0229 02:15:33.419432    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:15:33.617988    8584 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 02:15:35.620810    8584 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.0027096s)
	I0229 02:15:35.628068    8584 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0229 02:15:35.653067    8584 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:15:35.654344    8584 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 02:15:35.654416    8584 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:15:35.664071    8584 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 02:15:35.699171    8584 command_runner.go:130] > cgroupfs
	I0229 02:15:35.700391    8584 cni.go:84] Creating CNI manager for ""
	I0229 02:15:35.700684    8584 cni.go:136] 1 nodes found, recommending kindnet
	I0229 02:15:35.700684    8584 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:15:35.700770    8584 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.2.165 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-314500 NodeName:multinode-314500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.2.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.2.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:15:35.701130    8584 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.2.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-314500"
	  kubeletExtraArgs:
	    node-ip: 172.19.2.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.2.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:15:35.701263    8584 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-314500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.2.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:15:35.711763    8584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:15:35.728898    8584 command_runner.go:130] > kubeadm
	I0229 02:15:35.728898    8584 command_runner.go:130] > kubectl
	I0229 02:15:35.728898    8584 command_runner.go:130] > kubelet
	I0229 02:15:35.728898    8584 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:15:35.737884    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:15:35.754466    8584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0229 02:15:35.786652    8584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:15:35.818096    8584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0229 02:15:35.860377    8584 ssh_runner.go:195] Run: grep 172.19.2.165	control-plane.minikube.internal$ /etc/hosts
	I0229 02:15:35.867122    8584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.2.165	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:15:35.887430    8584 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500 for IP: 172.19.2.165
	I0229 02:15:35.887430    8584 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:35.888418    8584 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 02:15:35.888418    8584 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 02:15:35.889416    8584 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.key
	I0229 02:15:35.889416    8584 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.crt with IP's: []
	I0229 02:15:36.213588    8584 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.crt ...
	I0229 02:15:36.213588    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.crt: {Name:mk73b75f20ca1d2e0bec389400db48fd623b8015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:36.214068    8584 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.key ...
	I0229 02:15:36.214068    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.key: {Name:mkb1b1a5bd39eef2e9536007ed8aa8f214199fbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:36.215219    8584 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.3d9898f0
	I0229 02:15:36.215219    8584 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.3d9898f0 with IP's: [172.19.2.165 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 02:15:36.494396    8584 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.3d9898f0 ...
	I0229 02:15:36.494396    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.3d9898f0: {Name:mk936caf0d565f97194ec84a769f367930fe715a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:36.495081    8584 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.3d9898f0 ...
	I0229 02:15:36.496079    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.3d9898f0: {Name:mkafd075e8297f3e248df3102b52bd4b41170a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:36.496315    8584 certs.go:337] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.3d9898f0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt
	I0229 02:15:36.510316    8584 certs.go:341] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.3d9898f0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key
	I0229 02:15:36.510683    8584 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key
	I0229 02:15:36.510683    8584 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt with IP's: []
	I0229 02:15:36.721693    8584 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt ...
	I0229 02:15:36.721693    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt: {Name:mkd74b50be0a408b84b859db2dc4cdc2614195ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:36.723948    8584 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key ...
	I0229 02:15:36.724009    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key: {Name:mk76464224e14bc795ee483f0f2ecb96ca808e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
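The `crypto.go` lines above mint an apiserver certificate whose SANs carry the node, service, and loopback IPs. A rough openssl equivalent with a throwaway CA in a temp directory (minikube's own Go code, not openssl, does the real signing; all paths here are hypothetical stand-ins):

```shell
# Issue a demo apiserver-style cert carrying the same four IP SANs the
# log shows, signed by a freshly minted throwaway CA.
D=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=minikubeCA' \
  -keyout "$D/ca.key" -out "$D/ca.crt" -days 1 2>/dev/null
openssl req -newkey rsa:2048 -nodes -subj '/CN=minikube' \
  -keyout "$D/apiserver.key" -out "$D/apiserver.csr" 2>/dev/null
printf 'subjectAltName=IP:172.19.2.165,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1\n' > "$D/san.cnf"
openssl x509 -req -in "$D/apiserver.csr" -CA "$D/ca.crt" -CAkey "$D/ca.key" \
  -CAcreateserial -days 1 -extfile "$D/san.cnf" -out "$D/apiserver.crt" 2>/dev/null
openssl x509 -noout -text -in "$D/apiserver.crt" | grep -A1 'Subject Alternative Name'
```

10.96.0.1 is the first address of the ServiceCIDR (10.96.0.0/12), which is why it appears alongside the node IP in the log's SAN list.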
	I0229 02:15:36.724747    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 02:15:36.724747    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 02:15:36.725273    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 02:15:36.735647    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 02:15:36.736197    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 02:15:36.736248    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 02:15:36.736248    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 02:15:36.736248    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 02:15:36.737101    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem (1338 bytes)
	W0229 02:15:36.737357    8584 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312_empty.pem, impossibly tiny 0 bytes
	I0229 02:15:36.737357    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 02:15:36.737357    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 02:15:36.737906    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 02:15:36.738244    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0229 02:15:36.738845    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem (1708 bytes)
	I0229 02:15:36.739105    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /usr/share/ca-certificates/33122.pem
	I0229 02:15:36.739320    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:36.739481    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem -> /usr/share/ca-certificates/3312.pem
	I0229 02:15:36.740148    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:15:36.786597    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:15:36.830608    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:15:36.875812    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 02:15:36.921431    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:15:36.966942    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:15:37.013401    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:15:37.059070    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:15:37.106455    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /usr/share/ca-certificates/33122.pem (1708 bytes)
	I0229 02:15:37.156672    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:15:37.203394    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem --> /usr/share/ca-certificates/3312.pem (1338 bytes)
	I0229 02:15:37.251707    8584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:15:37.295710    8584 ssh_runner.go:195] Run: openssl version
	I0229 02:15:37.305455    8584 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 02:15:37.316796    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/33122.pem && ln -fs /usr/share/ca-certificates/33122.pem /etc/ssl/certs/33122.pem"
	I0229 02:15:37.346166    8584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/33122.pem
	I0229 02:15:37.353171    8584 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:15:37.354028    8584 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:15:37.362846    8584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/33122.pem
	I0229 02:15:37.373491    8584 command_runner.go:130] > 3ec20f2e
	I0229 02:15:37.385486    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/33122.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:15:37.415489    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:15:37.444489    8584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:37.451960    8584 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:37.451960    8584 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:37.460116    8584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:37.469671    8584 command_runner.go:130] > b5213941
	I0229 02:15:37.480093    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:15:37.508112    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3312.pem && ln -fs /usr/share/ca-certificates/3312.pem /etc/ssl/certs/3312.pem"
	I0229 02:15:37.535081    8584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3312.pem
	I0229 02:15:37.542076    8584 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:15:37.542657    8584 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:15:37.552276    8584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3312.pem
	I0229 02:15:37.561453    8584 command_runner.go:130] > 51391683
	I0229 02:15:37.570468    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3312.pem /etc/ssl/certs/51391683.0"
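The `openssl x509 -hash` / `ln -fs` pairs above wire each CA into OpenSSL's trust directory, which is looked up by subject-name hash as `<hash>.0`. A sketch with a throwaway self-signed cert in a temp directory:

```shell
# OpenSSL resolves CAs in a certs directory by subject hash, so each
# PEM needs a <hash>.0 symlink beside it. Demo cert and dir are temporary.
CERTDIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demoCA' \
  -keyout "$CERTDIR/ca.key" -out "$CERTDIR/ca.pem" -days 1 2>/dev/null
HASH=$(openssl x509 -hash -noout -in "$CERTDIR/ca.pem")
ln -fs "$CERTDIR/ca.pem" "$CERTDIR/$HASH.0"
echo "$HASH"
```

The hash is eight lowercase hex digits (e.g. the log's `3ec20f2e`, `b5213941`, `51391683`); the `.0` suffix disambiguates distinct CAs whose subjects happen to hash identically.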
	I0229 02:15:37.599088    8584 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:15:37.607208    8584 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:15:37.607208    8584 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:15:37.607627    8584 kubeadm.go:404] StartCluster: {Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.165 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:15:37.614406    8584 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 02:15:37.651041    8584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:15:37.669431    8584 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0229 02:15:37.669431    8584 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0229 02:15:37.669431    8584 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0229 02:15:37.679297    8584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:15:37.704096    8584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:15:37.722096    8584 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0229 02:15:37.722096    8584 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0229 02:15:37.722096    8584 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0229 02:15:37.722096    8584 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:15:37.723135    8584 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:15:37.723135    8584 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 02:15:38.381888    8584 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:15:38.381962    8584 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:15:51.901148    8584 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 02:15:51.901148    8584 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0229 02:15:51.901148    8584 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:15:51.901148    8584 command_runner.go:130] > [preflight] Running pre-flight checks
	I0229 02:15:51.901731    8584 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:15:51.901731    8584 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:15:51.901836    8584 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:15:51.901836    8584 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:15:51.902556    8584 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:15:51.902556    8584 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:15:51.902691    8584 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:15:51.902691    8584 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:15:51.903567    8584 out.go:204]   - Generating certificates and keys ...
	I0229 02:15:51.903626    8584 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:15:51.903626    8584 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0229 02:15:51.903626    8584 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:15:51.903626    8584 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0229 02:15:51.904297    8584 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 02:15:51.904297    8584 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 02:15:51.904297    8584 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0229 02:15:51.904297    8584 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 02:15:51.904297    8584 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 02:15:51.904297    8584 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0229 02:15:51.904906    8584 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 02:15:51.904937    8584 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0229 02:15:51.905063    8584 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 02:15:51.905063    8584 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0229 02:15:51.905063    8584 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-314500] and IPs [172.19.2.165 127.0.0.1 ::1]
	I0229 02:15:51.905063    8584 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-314500] and IPs [172.19.2.165 127.0.0.1 ::1]
	I0229 02:15:51.905063    8584 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 02:15:51.905595    8584 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0229 02:15:51.905775    8584 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-314500] and IPs [172.19.2.165 127.0.0.1 ::1]
	I0229 02:15:51.905775    8584 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-314500] and IPs [172.19.2.165 127.0.0.1 ::1]
	I0229 02:15:51.905775    8584 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 02:15:51.905775    8584 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 02:15:51.906311    8584 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 02:15:51.906311    8584 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 02:15:51.906451    8584 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0229 02:15:51.906451    8584 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 02:15:51.906648    8584 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:15:51.906648    8584 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:15:51.906648    8584 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:15:51.906648    8584 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:15:51.906648    8584 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:15:51.906648    8584 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:15:51.907239    8584 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:15:51.907322    8584 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:15:51.907444    8584 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:15:51.907444    8584 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:15:51.907639    8584 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:15:51.907639    8584 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:15:51.907772    8584 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:15:51.907840    8584 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:15:51.908342    8584 out.go:204]   - Booting up control plane ...
	I0229 02:15:51.908342    8584 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:15:51.908342    8584 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:15:51.908868    8584 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:15:51.908868    8584 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:15:51.908983    8584 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:15:51.909056    8584 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:15:51.909179    8584 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:15:51.909179    8584 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:15:51.909179    8584 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:15:51.909179    8584 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:15:51.909179    8584 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:15:51.909179    8584 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 02:15:51.909950    8584 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:15:51.909950    8584 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:15:51.910229    8584 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.507183 seconds
	I0229 02:15:51.910229    8584 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.507183 seconds
	I0229 02:15:51.910438    8584 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:15:51.910552    8584 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:15:51.910616    8584 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:15:51.910616    8584 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:15:51.910616    8584 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:15:51.911258    8584 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:15:51.911797    8584 command_runner.go:130] > [mark-control-plane] Marking the node multinode-314500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:15:51.911912    8584 kubeadm.go:322] [mark-control-plane] Marking the node multinode-314500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:15:51.911912    8584 command_runner.go:130] > [bootstrap-token] Using token: 0hv5co.fj6ugwf787q3himr
	I0229 02:15:51.911912    8584 kubeadm.go:322] [bootstrap-token] Using token: 0hv5co.fj6ugwf787q3himr
	I0229 02:15:51.912545    8584 out.go:204]   - Configuring RBAC rules ...
	I0229 02:15:51.912545    8584 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:15:51.912545    8584 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:15:51.912545    8584 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:15:51.913096    8584 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:15:51.913282    8584 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:15:51.913282    8584 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:15:51.913282    8584 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:15:51.913282    8584 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:15:51.913282    8584 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:15:51.913282    8584 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:15:51.913282    8584 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:15:51.913282    8584 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:15:51.914161    8584 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:15:51.914161    8584 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:15:51.914161    8584 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0229 02:15:51.914161    8584 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:15:51.914161    8584 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:15:51.914161    8584 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0229 02:15:51.914161    8584 kubeadm.go:322] 
	I0229 02:15:51.914161    8584 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0229 02:15:51.914161    8584 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:15:51.914161    8584 kubeadm.go:322] 
	I0229 02:15:51.914161    8584 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0229 02:15:51.914161    8584 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:15:51.914161    8584 kubeadm.go:322] 
	I0229 02:15:51.914161    8584 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0229 02:15:51.914161    8584 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:15:51.914161    8584 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:15:51.914161    8584 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:15:51.914161    8584 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:15:51.915155    8584 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:15:51.915155    8584 kubeadm.go:322] 
	I0229 02:15:51.915155    8584 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0229 02:15:51.915155    8584 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:15:51.915155    8584 kubeadm.go:322] 
	I0229 02:15:51.915155    8584 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:15:51.915155    8584 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:15:51.915155    8584 kubeadm.go:322] 
	I0229 02:15:51.915155    8584 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0229 02:15:51.915155    8584 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:15:51.915155    8584 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:15:51.915155    8584 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:15:51.915155    8584 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:15:51.915155    8584 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:15:51.915155    8584 kubeadm.go:322] 
	I0229 02:15:51.915155    8584 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:15:51.915155    8584 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:15:51.915155    8584 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:15:51.915155    8584 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0229 02:15:51.916151    8584 kubeadm.go:322] 
	I0229 02:15:51.916151    8584 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0hv5co.fj6ugwf787q3himr \
	I0229 02:15:51.916151    8584 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 0hv5co.fj6ugwf787q3himr \
	I0229 02:15:51.916151    8584 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 \
	I0229 02:15:51.916151    8584 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 \
	I0229 02:15:51.916151    8584 command_runner.go:130] > 	--control-plane 
	I0229 02:15:51.916151    8584 kubeadm.go:322] 	--control-plane 
	I0229 02:15:51.916151    8584 kubeadm.go:322] 
	I0229 02:15:51.916151    8584 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:15:51.916151    8584 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:15:51.916151    8584 kubeadm.go:322] 
	I0229 02:15:51.916151    8584 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0hv5co.fj6ugwf787q3himr \
	I0229 02:15:51.916151    8584 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0hv5co.fj6ugwf787q3himr \
	I0229 02:15:51.917165    8584 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 
	I0229 02:15:51.917165    8584 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 
	I0229 02:15:51.917165    8584 cni.go:84] Creating CNI manager for ""
	I0229 02:15:51.917165    8584 cni.go:136] 1 nodes found, recommending kindnet
	I0229 02:15:51.917165    8584 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0229 02:15:51.926742    8584 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 02:15:51.933753    8584 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 02:15:51.933753    8584 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 02:15:51.933753    8584 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 02:15:51.933753    8584 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 02:15:51.933753    8584 command_runner.go:130] > Access: 2024-02-29 02:14:07.987005400 +0000
	I0229 02:15:51.933753    8584 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 02:15:51.933753    8584 command_runner.go:130] > Change: 2024-02-29 02:13:59.368000000 +0000
	I0229 02:15:51.933753    8584 command_runner.go:130] >  Birth: -
	I0229 02:15:51.934743    8584 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 02:15:51.934743    8584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 02:15:51.986743    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 02:15:53.339082    8584 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0229 02:15:53.347087    8584 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0229 02:15:53.357471    8584 command_runner.go:130] > serviceaccount/kindnet created
	I0229 02:15:53.372482    8584 command_runner.go:130] > daemonset.apps/kindnet created
	I0229 02:15:53.376817    8584 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.3899963s)
	I0229 02:15:53.376885    8584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:15:53.387776    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:53.389804    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=multinode-314500 minikube.k8s.io/updated_at=2024_02_29T02_15_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:53.410555    8584 command_runner.go:130] > -16
	I0229 02:15:53.410635    8584 ops.go:34] apiserver oom_adj: -16
	I0229 02:15:53.572950    8584 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0229 02:15:53.573242    8584 command_runner.go:130] > node/multinode-314500 labeled
	I0229 02:15:53.583665    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:53.702923    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:54.086498    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:54.213077    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:54.589736    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:54.707092    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:55.094365    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:55.219281    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:55.594452    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:55.714603    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:56.086985    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:56.210093    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:56.594292    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:56.710854    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:57.092717    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:57.202893    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:57.596461    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:57.709250    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:58.097022    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:58.207043    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:58.585505    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:58.700383    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:59.087317    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:59.199211    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:59.589420    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:59.709521    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:00.099207    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:00.248193    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:00.587996    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:00.710610    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:01.089490    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:01.210939    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:01.588438    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:01.719364    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:02.095606    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:02.219852    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:02.583712    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:02.688720    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:03.085804    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:03.198833    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:03.589679    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:03.697234    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:04.094021    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:04.277722    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:04.585546    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:04.713527    8584 command_runner.go:130] > NAME      SECRETS   AGE
	I0229 02:16:04.713527    8584 command_runner.go:130] > default   0         0s
	I0229 02:16:04.713527    8584 kubeadm.go:1088] duration metric: took 11.3359271s to wait for elevateKubeSystemPrivileges.
	I0229 02:16:04.713527    8584 kubeadm.go:406] StartCluster complete in 27.1044579s
	I0229 02:16:04.713527    8584 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:16:04.713527    8584 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:16:04.714507    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:16:04.716496    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:16:04.716496    8584 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:16:04.716496    8584 addons.go:69] Setting storage-provisioner=true in profile "multinode-314500"
	I0229 02:16:04.716496    8584 addons.go:234] Setting addon storage-provisioner=true in "multinode-314500"
	I0229 02:16:04.716496    8584 addons.go:69] Setting default-storageclass=true in profile "multinode-314500"
	I0229 02:16:04.716496    8584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-314500"
	I0229 02:16:04.716496    8584 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:16:04.716496    8584 host.go:66] Checking if "multinode-314500" exists ...
	I0229 02:16:04.717509    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:16:04.718505    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:16:04.730512    8584 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:16:04.731520    8584 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:16:04.732504    8584 cert_rotation.go:137] Starting client certificate rotation controller
	I0229 02:16:04.732504    8584 round_trippers.go:463] GET https://172.19.2.165:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 02:16:04.733522    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:04.733522    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:04.733522    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:04.749641    8584 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0229 02:16:04.750464    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:04.750464    8584 round_trippers.go:580]     Audit-Id: 9956226a-c219-49d1-8683-804ff4a7c6af
	I0229 02:16:04.750525    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:04.750525    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:04.750525    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:04.750525    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:04.750525    8584 round_trippers.go:580]     Content-Length: 291
	I0229 02:16:04.750525    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:04 GMT
	I0229 02:16:04.750525    8584 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"255","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0229 02:16:04.751271    8584 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"255","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0229 02:16:04.751368    8584 round_trippers.go:463] PUT https://172.19.2.165:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 02:16:04.751368    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:04.751368    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:04.751368    8584 round_trippers.go:473]     Content-Type: application/json
	I0229 02:16:04.751368    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:04.770121    8584 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0229 02:16:04.770435    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:04.770435    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:04.770435    8584 round_trippers.go:580]     Content-Length: 291
	I0229 02:16:04.770435    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:04 GMT
	I0229 02:16:04.770435    8584 round_trippers.go:580]     Audit-Id: 926adfd2-ba76-4038-9182-d6c558cc8d06
	I0229 02:16:04.770435    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:04.770518    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:04.770518    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:04.770518    8584 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"337","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0229 02:16:04.883459    8584 command_runner.go:130] > apiVersion: v1
	I0229 02:16:04.883862    8584 command_runner.go:130] > data:
	I0229 02:16:04.883862    8584 command_runner.go:130] >   Corefile: |
	I0229 02:16:04.884003    8584 command_runner.go:130] >     .:53 {
	I0229 02:16:04.884003    8584 command_runner.go:130] >         errors
	I0229 02:16:04.884003    8584 command_runner.go:130] >         health {
	I0229 02:16:04.884003    8584 command_runner.go:130] >            lameduck 5s
	I0229 02:16:04.884003    8584 command_runner.go:130] >         }
	I0229 02:16:04.884126    8584 command_runner.go:130] >         ready
	I0229 02:16:04.884188    8584 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0229 02:16:04.884188    8584 command_runner.go:130] >            pods insecure
	I0229 02:16:04.884188    8584 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0229 02:16:04.884188    8584 command_runner.go:130] >            ttl 30
	I0229 02:16:04.884188    8584 command_runner.go:130] >         }
	I0229 02:16:04.884188    8584 command_runner.go:130] >         prometheus :9153
	I0229 02:16:04.884188    8584 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0229 02:16:04.884188    8584 command_runner.go:130] >            max_concurrent 1000
	I0229 02:16:04.884188    8584 command_runner.go:130] >         }
	I0229 02:16:04.884188    8584 command_runner.go:130] >         cache 30
	I0229 02:16:04.884188    8584 command_runner.go:130] >         loop
	I0229 02:16:04.884188    8584 command_runner.go:130] >         reload
	I0229 02:16:04.884188    8584 command_runner.go:130] >         loadbalance
	I0229 02:16:04.884188    8584 command_runner.go:130] >     }
	I0229 02:16:04.884188    8584 command_runner.go:130] > kind: ConfigMap
	I0229 02:16:04.884188    8584 command_runner.go:130] > metadata:
	I0229 02:16:04.884188    8584 command_runner.go:130] >   creationTimestamp: "2024-02-29T02:15:51Z"
	I0229 02:16:04.884188    8584 command_runner.go:130] >   name: coredns
	I0229 02:16:04.884188    8584 command_runner.go:130] >   namespace: kube-system
	I0229 02:16:04.884188    8584 command_runner.go:130] >   resourceVersion: "251"
	I0229 02:16:04.884188    8584 command_runner.go:130] >   uid: 3fc93d17-14a4-4d49-9f77-f2cd8adceaed
	I0229 02:16:04.887987    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 02:16:05.242860    8584 round_trippers.go:463] GET https://172.19.2.165:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 02:16:05.242860    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:05.242860    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:05.242860    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:05.287074    8584 round_trippers.go:574] Response Status: 200 OK in 43 milliseconds
	I0229 02:16:05.287143    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:05.287143    8584 round_trippers.go:580]     Content-Length: 291
	I0229 02:16:05.287213    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:05 GMT
	I0229 02:16:05.287213    8584 round_trippers.go:580]     Audit-Id: e6e6cf94-608a-4333-ac18-3d38f86552f2
	I0229 02:16:05.287213    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:05.287213    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:05.287213    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:05.287213    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:05.289816    8584 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"367","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0229 02:16:05.290759    8584 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-314500" context rescaled to 1 replicas
	I0229 02:16:05.290835    8584 start.go:223] Will wait 6m0s for node &{Name: IP:172.19.2.165 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 02:16:05.291722    8584 out.go:177] * Verifying Kubernetes components...
	I0229 02:16:05.303433    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:16:05.612313    8584 command_runner.go:130] > configmap/coredns replaced
	I0229 02:16:05.617363    8584 start.go:929] {"host.minikube.internal": 172.19.0.1} host record injected into CoreDNS's ConfigMap
	I0229 02:16:05.618519    8584 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:16:05.619544    8584 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:16:05.620617    8584 node_ready.go:35] waiting up to 6m0s for node "multinode-314500" to be "Ready" ...
	I0229 02:16:05.620617    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:05.620617    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:05.620617    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:05.620617    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:05.625396    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:05.625396    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:05.625396    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:05.625396    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:05.625396    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:05.625396    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:05.625396    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:05 GMT
	I0229 02:16:05.625396    8584 round_trippers.go:580]     Audit-Id: 410524b5-ba74-4eed-b6ad-c164114a2e45
	I0229 02:16:05.626569    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:06.130951    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:06.130951    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:06.130951    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:06.130951    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:06.134758    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:06.135746    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:06.135746    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:06.135746    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:06 GMT
	I0229 02:16:06.135746    8584 round_trippers.go:580]     Audit-Id: d3921daf-0cf7-4693-9c8c-01eed6add86d
	I0229 02:16:06.135746    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:06.135746    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:06.135871    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:06.136309    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:06.622511    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:06.622511    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:06.622511    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:06.622511    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:06.628940    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:16:06.628940    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:06.628940    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:06.628940    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:06 GMT
	I0229 02:16:06.628940    8584 round_trippers.go:580]     Audit-Id: dde4c73f-476a-4c04-8fb3-4461985f3b72
	I0229 02:16:06.628940    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:06.628940    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:06.628940    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:06.630172    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:06.883598    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:16:06.883988    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:06.885333    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:16:06.885333    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:06.886306    8584 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:16:06.886086    8584 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:16:06.887008    8584 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:16:06.887171    8584 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:16:06.887245    8584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:16:06.887293    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:16:06.888249    8584 addons.go:234] Setting addon default-storageclass=true in "multinode-314500"
	I0229 02:16:06.888325    8584 host.go:66] Checking if "multinode-314500" exists ...
	I0229 02:16:06.888997    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:16:07.129415    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:07.129415    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:07.129415    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:07.129415    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:07.137838    8584 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:16:07.137912    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:07.137912    8584 round_trippers.go:580]     Audit-Id: 399d2e3f-e8cf-4920-9750-05d41b929aad
	I0229 02:16:07.137912    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:07.137912    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:07.138018    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:07.138048    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:07.138048    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:07 GMT
	I0229 02:16:07.138329    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:07.622304    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:07.622304    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:07.622304    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:07.622304    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:07.633000    8584 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0229 02:16:07.633053    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:07.633053    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:07.633123    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:07.633123    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:07.633123    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:07.633123    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:07 GMT
	I0229 02:16:07.633123    8584 round_trippers.go:580]     Audit-Id: 6c87ad14-b146-42a7-ae05-253fa6399983
	I0229 02:16:07.633497    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:07.634314    8584 node_ready.go:58] node "multinode-314500" has status "Ready":"False"
	I0229 02:16:08.129012    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:08.129128    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:08.129128    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:08.129128    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:08.133061    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:08.133061    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:08.133061    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:08.133061    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:08.133061    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:08 GMT
	I0229 02:16:08.133061    8584 round_trippers.go:580]     Audit-Id: 4486154a-148b-4852-9398-d4ef707b126a
	I0229 02:16:08.133061    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:08.133061    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:08.133587    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:08.622112    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:08.622112    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:08.622112    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:08.622112    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:08.625110    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:16:08.625110    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:08.625110    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:08 GMT
	I0229 02:16:08.625110    8584 round_trippers.go:580]     Audit-Id: 9fda18cb-76a8-4b72-85bc-268e5c5ee771
	I0229 02:16:08.625110    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:08.625110    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:08.625110    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:08.625110    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:08.626110    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:09.062859    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:16:09.062859    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:09.062859    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:16:09.128069    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:16:09.128069    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:09.128168    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:09.128168    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:09.128168    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:09.128168    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:09.128282    8584 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:16:09.128363    8584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:16:09.128396    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:16:09.132486    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:09.132486    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:09.132486    8584 round_trippers.go:580]     Audit-Id: f086feb0-3bd9-4370-9635-53e735870f89
	I0229 02:16:09.132486    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:09.132486    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:09.132486    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:09.132486    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:09.132486    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:09 GMT
	I0229 02:16:09.133491    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:09.626134    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:09.626226    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:09.626226    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:09.626226    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:09.631701    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:16:09.631701    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:09.631701    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:09.631701    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:09.631701    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:09.631701    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:09.631701    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:09 GMT
	I0229 02:16:09.631701    8584 round_trippers.go:580]     Audit-Id: a545aa49-b83a-4003-984f-45f9fe202d60
	I0229 02:16:09.631701    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:10.130946    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:10.130946    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:10.130946    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:10.130946    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:10.134969    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:10.135394    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:10.135394    8584 round_trippers.go:580]     Audit-Id: 1724a3d5-9143-406a-bca9-05b66a0b2969
	I0229 02:16:10.135394    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:10.135394    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:10.135394    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:10.135394    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:10.135394    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:10 GMT
	I0229 02:16:10.135694    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:10.136156    8584 node_ready.go:58] node "multinode-314500" has status "Ready":"False"
	I0229 02:16:10.622330    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:10.622330    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:10.622420    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:10.622420    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:10.625946    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:10.625946    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:10.625946    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:10 GMT
	I0229 02:16:10.625946    8584 round_trippers.go:580]     Audit-Id: 7d5dc576-023c-4d62-8b5e-1f61e1eb4c92
	I0229 02:16:10.625946    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:10.625946    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:10.625946    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:10.625946    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:10.625946    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:11.130592    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:11.130592    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:11.130686    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:11.130686    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:11.133777    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:11.134244    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:11.134244    8584 round_trippers.go:580]     Audit-Id: 6c014d3d-aaf2-4324-a394-1f4ceda7527a
	I0229 02:16:11.134244    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:11.134244    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:11.134244    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:11.134244    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:11.134244    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:11 GMT
	I0229 02:16:11.134511    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:11.279789    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:16:11.280790    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:11.280889    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:16:11.611705    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:16:11.611705    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:11.613235    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:16:11.622115    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:11.622115    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:11.622115    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:11.622115    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:11.626134    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:11.626583    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:11.626583    8584 round_trippers.go:580]     Audit-Id: 7ca4c11f-3d0b-4b6a-aeae-c8176d56d748
	I0229 02:16:11.626583    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:11.626583    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:11.626583    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:11.626583    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:11.626583    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:11 GMT
	I0229 02:16:11.626743    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:11.746983    8584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:16:12.129858    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:12.129858    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:12.129858    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:12.129858    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:12.134103    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:12.134185    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:12.134185    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:12.134185    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:12.134185    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:12 GMT
	I0229 02:16:12.134185    8584 round_trippers.go:580]     Audit-Id: e300e49e-48d6-4796-b3e3-283ceb52ba8d
	I0229 02:16:12.134185    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:12.134185    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:12.134399    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:12.424764    8584 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0229 02:16:12.424842    8584 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0229 02:16:12.424922    8584 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0229 02:16:12.425012    8584 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0229 02:16:12.425012    8584 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0229 02:16:12.425069    8584 command_runner.go:130] > pod/storage-provisioner created
	I0229 02:16:12.621581    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:12.621581    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:12.621581    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:12.621581    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:12.625839    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:12.625917    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:12.625980    8584 round_trippers.go:580]     Audit-Id: ab822128-f5fe-4739-8fe5-bd7b6f1890e7
	I0229 02:16:12.625980    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:12.625980    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:12.625980    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:12.625980    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:12.625980    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:12 GMT
	I0229 02:16:12.626299    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:12.626886    8584 node_ready.go:58] node "multinode-314500" has status "Ready":"False"
	I0229 02:16:13.130997    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:13.130997    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:13.130997    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:13.130997    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:13.137409    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:16:13.137482    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:13.137482    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:13.137482    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:13.137482    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:13.137482    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:13.137482    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:13 GMT
	I0229 02:16:13.137482    8584 round_trippers.go:580]     Audit-Id: ba54e846-36f6-446a-839e-4e0e3c8dba08
	I0229 02:16:13.137692    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:13.621687    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:13.621687    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:13.621687    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:13.621687    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:13.624271    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:16:13.625273    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:13.625273    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:13.625273    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:13.625273    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:13.625273    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:13.625273    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:13 GMT
	I0229 02:16:13.625273    8584 round_trippers.go:580]     Audit-Id: 87a66a52-80a0-45f3-8af7-9d492d7d293b
	I0229 02:16:13.625391    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:13.739754    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:16:13.739808    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:13.739808    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:16:13.872755    8584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:16:14.123275    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:14.123367    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:14.123367    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:14.123367    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:14.126646    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:16:14.126646    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:14.126646    8584 round_trippers.go:580]     Audit-Id: 19134671-8c5f-4095-b846-f6fbd46bcd0b
	I0229 02:16:14.126646    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:14.126646    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:14.126646    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:14.126646    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:14.126747    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:14 GMT
	I0229 02:16:14.127021    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:14.135079    8584 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0229 02:16:14.135079    8584 round_trippers.go:463] GET https://172.19.2.165:8443/apis/storage.k8s.io/v1/storageclasses
	I0229 02:16:14.135079    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:14.135079    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:14.135605    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:14.138653    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:14.138653    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:14.138653    8584 round_trippers.go:580]     Audit-Id: 3a1a0ba3-f2e4-4d64-b6c4-3de42a6386a0
	I0229 02:16:14.138653    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:14.138653    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:14.138653    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:14.138653    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:14.138653    8584 round_trippers.go:580]     Content-Length: 1273
	I0229 02:16:14.138653    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:14 GMT
	I0229 02:16:14.138653    8584 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"standard","uid":"a7ad9511-65e8-4eef-89b4-7c1b803fc689","resourceVersion":"413","creationTimestamp":"2024-02-29T02:16:14Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-29T02:16:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0229 02:16:14.138653    8584 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a7ad9511-65e8-4eef-89b4-7c1b803fc689","resourceVersion":"413","creationTimestamp":"2024-02-29T02:16:14Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-29T02:16:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0229 02:16:14.138653    8584 round_trippers.go:463] PUT https://172.19.2.165:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0229 02:16:14.138653    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:14.138653    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:14.138653    8584 round_trippers.go:473]     Content-Type: application/json
	I0229 02:16:14.138653    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:14.143659    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:16:14.143659    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:14.143659    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:14.143659    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:14.143659    8584 round_trippers.go:580]     Content-Length: 1220
	I0229 02:16:14.143659    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:14 GMT
	I0229 02:16:14.143659    8584 round_trippers.go:580]     Audit-Id: 0eeb2b85-2218-4fa6-a0d6-7d8e8b89a118
	I0229 02:16:14.143659    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:14.143659    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:14.143659    8584 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a7ad9511-65e8-4eef-89b4-7c1b803fc689","resourceVersion":"413","creationTimestamp":"2024-02-29T02:16:14Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-29T02:16:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0229 02:16:14.144910    8584 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 02:16:14.144910    8584 addons.go:505] enable addons completed in 9.4278877s: enabled=[storage-provisioner default-storageclass]
	I0229 02:16:14.631487    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:14.631603    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:14.631603    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:14.631603    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:14.635120    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:14.635120    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:14.635120    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:14.635120    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:14 GMT
	I0229 02:16:14.635120    8584 round_trippers.go:580]     Audit-Id: 448a4e05-de72-4089-adc3-a0cf52036b54
	I0229 02:16:14.635120    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:14.635120    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:14.635120    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:14.635840    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:14.636842    8584 node_ready.go:58] node "multinode-314500" has status "Ready":"False"
	I0229 02:16:15.134789    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:15.134789    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:15.134789    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:15.134789    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:15.138353    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:15.138353    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:15.138353    8584 round_trippers.go:580]     Audit-Id: 0c5724bc-14bf-4e22-8b28-2eed750f5e6b
	I0229 02:16:15.138353    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:15.138353    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:15.138353    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:15.138353    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:15.138353    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:15 GMT
	I0229 02:16:15.139035    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:15.636203    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:15.636203    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:15.636203    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:15.636203    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:15.639886    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:15.639886    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:15.639886    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:15 GMT
	I0229 02:16:15.639886    8584 round_trippers.go:580]     Audit-Id: b2f94694-f112-41b9-8bba-5b0a24ebff15
	I0229 02:16:15.639886    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:15.639886    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:15.639886    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:15.639886    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:15.640603    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:16.124483    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:16.124483    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:16.124483    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:16.124483    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:16.128036    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:16.128036    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:16.128036    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:16.128036    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:16.128036    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:16.128036    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:16 GMT
	I0229 02:16:16.128036    8584 round_trippers.go:580]     Audit-Id: 65ab147a-6009-41b7-8632-6cf748b1a929
	I0229 02:16:16.128036    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:16.128774    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:16.630690    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:16.630690    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:16.630690    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:16.630690    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:16.633754    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:16.634195    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:16.634195    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:16 GMT
	I0229 02:16:16.634195    8584 round_trippers.go:580]     Audit-Id: ba8279af-ce65-46db-a113-cfbea5d58aec
	I0229 02:16:16.634195    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:16.634195    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:16.634195    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:16.634247    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:16.634530    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:16.635027    8584 node_ready.go:49] node "multinode-314500" has status "Ready":"True"
	I0229 02:16:16.635027    8584 node_ready.go:38] duration metric: took 11.013794s waiting for node "multinode-314500" to be "Ready" ...
	I0229 02:16:16.635027    8584 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:16:16.635027    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods
	I0229 02:16:16.635027    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:16.635027    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:16.635027    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:16.638680    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:16.638680    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:16.638680    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:16.638680    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:16.638680    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:16.638680    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:16 GMT
	I0229 02:16:16.638680    8584 round_trippers.go:580]     Audit-Id: a971a97c-8e2b-4fb0-abd4-182b3286afda
	I0229 02:16:16.638680    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:16.639968    8584 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"420","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53932 chars]
	I0229 02:16:16.644805    8584 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:16.644983    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:16:16.644983    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:16.645026    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:16.645026    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:16.649483    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:16.649525    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:16.649525    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:16.649525    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:16 GMT
	I0229 02:16:16.649525    8584 round_trippers.go:580]     Audit-Id: 598d01c3-5e69-4f62-935f-f65a0e597752
	I0229 02:16:16.649562    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:16.649618    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:16.649618    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:16.649618    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"420","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0229 02:16:16.650559    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:16.650614    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:16.650614    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:16.650614    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:16.653509    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:16:16.653509    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:16.653509    8584 round_trippers.go:580]     Audit-Id: d4632fc3-b104-4774-9f8d-ad65a9b99634
	I0229 02:16:16.653509    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:16.653509    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:16.653509    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:16.653509    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:16.653509    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:16 GMT
	I0229 02:16:16.653509    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.153751    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:16:17.153915    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.153915    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.153915    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.157465    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:17.157656    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.157656    8584 round_trippers.go:580]     Audit-Id: 58765edd-d51c-4bd1-aba2-02e7a49d9565
	I0229 02:16:17.157656    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.157656    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.157656    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.157656    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.157656    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.157656    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"420","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0229 02:16:17.159074    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.159074    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.159198    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.159261    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.165635    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:16:17.165635    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.165635    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.165635    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.165635    8584 round_trippers.go:580]     Audit-Id: e2d12760-48b9-4e0d-bde2-ffc401c1ae39
	I0229 02:16:17.165635    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.165635    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.165635    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.166245    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.646141    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:16:17.646196    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.646264    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.646264    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.649568    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:17.649568    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.649568    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.649568    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.649568    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.649568    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.649568    8584 round_trippers.go:580]     Audit-Id: 643bfd9c-db53-4709-889d-f2c3b799b531
	I0229 02:16:17.649568    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.649568    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6282 chars]
	I0229 02:16:17.650897    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.650897    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.650950    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.650950    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.653872    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:16:17.653872    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.653872    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.653969    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.654052    8584 round_trippers.go:580]     Audit-Id: bd9d3b81-4e48-4cd4-b61c-872a7afd1012
	I0229 02:16:17.654052    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.654052    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.654083    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.654372    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.654824    8584 pod_ready.go:92] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"True"
	I0229 02:16:17.654879    8584 pod_ready.go:81] duration metric: took 1.0099842s waiting for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.654879    8584 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.655009    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-314500
	I0229 02:16:17.655009    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.655009    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.655009    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.665273    8584 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0229 02:16:17.665273    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.665273    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.665273    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.665273    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.665273    8584 round_trippers.go:580]     Audit-Id: 526c5f16-2a66-45ce-8632-d0f9fa5f6ba7
	I0229 02:16:17.665273    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.665273    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.667768    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-314500","namespace":"kube-system","uid":"6fc42e7c-48f9-46df-bf2f-861e0684e37f","resourceVersion":"323","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.2.165:2379","kubernetes.io/config.hash":"0b84e88097a2b59a9c108b0f9fa2b889","kubernetes.io/config.mirror":"0b84e88097a2b59a9c108b0f9fa2b889","kubernetes.io/config.seen":"2024-02-29T02:15:52.221392786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5852 chars]
	I0229 02:16:17.668271    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.668271    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.668271    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.668271    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.677864    8584 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0229 02:16:17.677864    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.677864    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.677864    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.677864    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.677864    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.677864    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.677864    8584 round_trippers.go:580]     Audit-Id: 4d992db8-60ef-49b3-b2e9-0703ba54de12
	I0229 02:16:17.678938    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.678938    8584 pod_ready.go:92] pod "etcd-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:16:17.678938    8584 pod_ready.go:81] duration metric: took 24.0576ms waiting for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.678938    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.679572    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-314500
	I0229 02:16:17.679572    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.679622    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.679622    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.683833    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:17.683833    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.683833    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.684456    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.684456    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.684456    8584 round_trippers.go:580]     Audit-Id: 6f1b85da-922b-459d-a8dc-fb211d6b23dc
	I0229 02:16:17.684456    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.684456    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.684668    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-314500","namespace":"kube-system","uid":"fc266082-ff2c-4bd1-951f-11dc765a28ae","resourceVersion":"303","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.2.165:8443","kubernetes.io/config.hash":"75abc10fab898952206cc1d682d3c922","kubernetes.io/config.mirror":"75abc10fab898952206cc1d682d3c922","kubernetes.io/config.seen":"2024-02-29T02:15:52.221397486Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7390 chars]
	I0229 02:16:17.685312    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.685312    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.685365    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.685365    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.690438    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:16:17.690438    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.690438    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.690438    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.690438    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.690438    8584 round_trippers.go:580]     Audit-Id: f815fd6b-646c-44c0-9468-208bff1f7a45
	I0229 02:16:17.690438    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.690438    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.690823    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.691302    8584 pod_ready.go:92] pod "kube-apiserver-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:16:17.691364    8584 pod_ready.go:81] duration metric: took 12.4254ms waiting for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.691364    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.691491    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-314500
	I0229 02:16:17.691491    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.691491    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.691491    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.693699    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:16:17.694098    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.694098    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.694098    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.694098    8584 round_trippers.go:580]     Audit-Id: bb9e2109-665e-49c3-ac65-cbc158c70f3e
	I0229 02:16:17.694098    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.694195    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.694195    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.694402    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-314500","namespace":"kube-system","uid":"58e57902-e113-44a9-b5b5-4aba2ba13491","resourceVersion":"302","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.mirror":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.seen":"2024-02-29T02:15:52.221398986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6965 chars]
	I0229 02:16:17.695017    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.695067    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.695067    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.695067    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.698234    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:17.698281    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.698281    8584 round_trippers.go:580]     Audit-Id: 148ca40f-d5fb-49be-8b8a-09cc4e3afa18
	I0229 02:16:17.698281    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.698281    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.698339    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.698339    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.698388    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.699249    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.699313    8584 pod_ready.go:92] pod "kube-controller-manager-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:16:17.699313    8584 pod_ready.go:81] duration metric: took 7.8948ms waiting for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.699313    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.699313    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:16:17.699313    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.699313    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.699313    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.702891    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:17.703633    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.703633    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.703633    8584 round_trippers.go:580]     Audit-Id: 7025cb07-a461-4530-bdd7-f2453b2a2350
	I0229 02:16:17.703633    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.703633    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.703633    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.703633    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.703905    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6r6j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b84b22d-3786-4f9e-a23a-c7cfc93bb671","resourceVersion":"394","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0229 02:16:17.832880    8584 request.go:629] Waited for 126.8086ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.832880    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.832880    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.832880    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.832880    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.836917    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:17.836917    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.836917    8584 round_trippers.go:580]     Audit-Id: 52268278-8be7-4449-a4bc-d534692682ee
	I0229 02:16:17.836917    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.836917    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.836917    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.836917    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.836917    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:18 GMT
	I0229 02:16:17.837455    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.837896    8584 pod_ready.go:92] pod "kube-proxy-6r6j4" in "kube-system" namespace has status "Ready":"True"
	I0229 02:16:17.837896    8584 pod_ready.go:81] duration metric: took 138.5747ms waiting for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.837896    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:18.036948    8584 request.go:629] Waited for 198.7966ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:16:18.037077    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:16:18.037077    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:18.037077    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:18.037077    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:18.040666    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:18.040666    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:18.040666    8584 round_trippers.go:580]     Audit-Id: 97ba3e81-c240-4d8f-a9e6-117a64b5672c
	I0229 02:16:18.041515    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:18.041515    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:18.041515    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:18.041515    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:18.041515    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:18 GMT
	I0229 02:16:18.041693    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-314500","namespace":"kube-system","uid":"31fcecc6-17de-43a6-892d-37cd915de64b","resourceVersion":"288","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.mirror":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.seen":"2024-02-29T02:15:52.221399886Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4695 chars]
	I0229 02:16:18.240929    8584 request.go:629] Waited for 198.242ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:18.241375    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:18.241435    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:18.241435    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:18.241435    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:18.244752    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:18.245526    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:18.245526    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:18.245611    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:18 GMT
	I0229 02:16:18.245611    8584 round_trippers.go:580]     Audit-Id: 7935c4ce-ff7f-4b35-bff9-a77da52c6dda
	I0229 02:16:18.245611    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:18.245611    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:18.245611    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:18.245611    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:18.246214    8584 pod_ready.go:92] pod "kube-scheduler-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:16:18.246214    8584 pod_ready.go:81] duration metric: took 408.2266ms waiting for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:18.246214    8584 pod_ready.go:38] duration metric: took 1.6110974s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:16:18.246214    8584 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:16:18.257038    8584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:18.283407    8584 command_runner.go:130] > 2018
	I0229 02:16:18.283407    8584 api_server.go:72] duration metric: took 12.9918453s to wait for apiserver process to appear ...
	I0229 02:16:18.283407    8584 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:16:18.283407    8584 api_server.go:253] Checking apiserver healthz at https://172.19.2.165:8443/healthz ...
	I0229 02:16:18.292685    8584 api_server.go:279] https://172.19.2.165:8443/healthz returned 200:
	ok
	I0229 02:16:18.293146    8584 round_trippers.go:463] GET https://172.19.2.165:8443/version
	I0229 02:16:18.293146    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:18.293146    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:18.293146    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:18.296745    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:18.296766    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:18.296766    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:18 GMT
	I0229 02:16:18.296844    8584 round_trippers.go:580]     Audit-Id: a3568257-7ba8-46aa-906e-199f937d3cb2
	I0229 02:16:18.296844    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:18.296844    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:18.296844    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:18.296844    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:18.296844    8584 round_trippers.go:580]     Content-Length: 264
	I0229 02:16:18.296933    8584 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0229 02:16:18.297126    8584 api_server.go:141] control plane version: v1.28.4
	I0229 02:16:18.297126    8584 api_server.go:131] duration metric: took 13.7187ms to wait for apiserver health ...
	I0229 02:16:18.297126    8584 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:16:18.441150    8584 request.go:629] Waited for 143.8801ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods
	I0229 02:16:18.441150    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods
	I0229 02:16:18.441150    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:18.441150    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:18.441150    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:18.446130    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:18.446130    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:18.446130    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:18 GMT
	I0229 02:16:18.446130    8584 round_trippers.go:580]     Audit-Id: ad8e47f3-2e6e-4c08-9bc9-672b7124a085
	I0229 02:16:18.446130    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:18.446130    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:18.446130    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:18.446130    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:18.447912    8584 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"439"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54048 chars]
	I0229 02:16:18.450435    8584 system_pods.go:59] 8 kube-system pods found
	I0229 02:16:18.450435    8584 system_pods.go:61] "coredns-5dd5756b68-8g6tg" [ef7fb259-9f24-4645-9eff-2b16f6789e1b] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "etcd-multinode-314500" [6fc42e7c-48f9-46df-bf2f-861e0684e37f] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "kindnet-t9r77" [4620d417-744c-4049-82ab-79d1ee7f047c] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "kube-apiserver-multinode-314500" [fc266082-ff2c-4bd1-951f-11dc765a28ae] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "kube-controller-manager-multinode-314500" [58e57902-e113-44a9-b5b5-4aba2ba13491] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "kube-proxy-6r6j4" [2b84b22d-3786-4f9e-a23a-c7cfc93bb671] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "kube-scheduler-multinode-314500" [31fcecc6-17de-43a6-892d-37cd915de64b] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "storage-provisioner" [9780520b-8ff9-408a-ab6f-41b63790ccd1] Running
	I0229 02:16:18.450435    8584 system_pods.go:74] duration metric: took 153.3001ms to wait for pod list to return data ...
	I0229 02:16:18.450435    8584 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:16:18.641470    8584 request.go:629] Waited for 191.0243ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/default/serviceaccounts
	I0229 02:16:18.641470    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/default/serviceaccounts
	I0229 02:16:18.641470    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:18.641470    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:18.641470    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:18.645874    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:18.645874    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:18.645874    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:18 GMT
	I0229 02:16:18.645874    8584 round_trippers.go:580]     Audit-Id: 9e04f5c6-c753-4db9-b22e-07bcf383223a
	I0229 02:16:18.646835    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:18.646835    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:18.646835    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:18.646835    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:18.646835    8584 round_trippers.go:580]     Content-Length: 261
	I0229 02:16:18.646835    8584 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"439"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a442432a-e4e1-4889-bfa8-e3967acc17f0","resourceVersion":"330","creationTimestamp":"2024-02-29T02:16:04Z"}}]}
	I0229 02:16:18.646835    8584 default_sa.go:45] found service account: "default"
	I0229 02:16:18.646835    8584 default_sa.go:55] duration metric: took 196.3895ms for default service account to be created ...
	I0229 02:16:18.646835    8584 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:16:18.844094    8584 request.go:629] Waited for 197.2476ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods
	I0229 02:16:18.844094    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods
	I0229 02:16:18.844094    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:18.844094    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:18.844094    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:18.848446    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:18.848446    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:18.848446    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:18.848446    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:18.848446    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:19 GMT
	I0229 02:16:18.848446    8584 round_trippers.go:580]     Audit-Id: 5e1f0b6a-e7f1-4363-96af-41558a1cff57
	I0229 02:16:18.848446    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:18.848446    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:18.850291    8584 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"439"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54048 chars]
	I0229 02:16:18.852542    8584 system_pods.go:86] 8 kube-system pods found
	I0229 02:16:18.852542    8584 system_pods.go:89] "coredns-5dd5756b68-8g6tg" [ef7fb259-9f24-4645-9eff-2b16f6789e1b] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "etcd-multinode-314500" [6fc42e7c-48f9-46df-bf2f-861e0684e37f] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "kindnet-t9r77" [4620d417-744c-4049-82ab-79d1ee7f047c] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "kube-apiserver-multinode-314500" [fc266082-ff2c-4bd1-951f-11dc765a28ae] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "kube-controller-manager-multinode-314500" [58e57902-e113-44a9-b5b5-4aba2ba13491] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "kube-proxy-6r6j4" [2b84b22d-3786-4f9e-a23a-c7cfc93bb671] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "kube-scheduler-multinode-314500" [31fcecc6-17de-43a6-892d-37cd915de64b] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "storage-provisioner" [9780520b-8ff9-408a-ab6f-41b63790ccd1] Running
	I0229 02:16:18.852542    8584 system_pods.go:126] duration metric: took 205.6953ms to wait for k8s-apps to be running ...
	I0229 02:16:18.852542    8584 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:16:18.861417    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:16:18.887054    8584 system_svc.go:56] duration metric: took 34.4312ms WaitForService to wait for kubelet.
	I0229 02:16:18.887149    8584 kubeadm.go:581] duration metric: took 13.5955543s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:16:18.887215    8584 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:16:19.031410    8584 request.go:629] Waited for 144.1874ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes
	I0229 02:16:19.031606    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes
	I0229 02:16:19.031606    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:19.031606    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:19.031606    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:19.035104    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:19.035104    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:19.035104    8584 round_trippers.go:580]     Audit-Id: 3e53124b-3fb7-4d71-a89e-22e59922a676
	I0229 02:16:19.035104    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:19.035104    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:19.035507    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:19.035507    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:19.035507    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:19 GMT
	I0229 02:16:19.035795    8584 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"439"},"items":[{"metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4834 chars]
	I0229 02:16:19.036569    8584 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:16:19.036646    8584 node_conditions.go:123] node cpu capacity is 2
	I0229 02:16:19.036646    8584 node_conditions.go:105] duration metric: took 149.4233ms to run NodePressure ...
	I0229 02:16:19.036755    8584 start.go:228] waiting for startup goroutines ...
	I0229 02:16:19.036755    8584 start.go:233] waiting for cluster config update ...
	I0229 02:16:19.036755    8584 start.go:242] writing updated cluster config ...
	I0229 02:16:19.038683    8584 out.go:177] 
	I0229 02:16:19.055810    8584 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:16:19.055971    8584 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:16:19.059124    8584 out.go:177] * Starting worker node multinode-314500-m02 in cluster multinode-314500
	I0229 02:16:19.059762    8584 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:16:19.059762    8584 cache.go:56] Caching tarball of preloaded images
	I0229 02:16:19.060125    8584 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:16:19.060125    8584 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 02:16:19.060125    8584 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:16:19.069726    8584 start.go:365] acquiring machines lock for multinode-314500-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:16:19.070853    8584 start.go:369] acquired machines lock for "multinode-314500-m02" in 145.1µs
	I0229 02:16:19.071032    8584 start.go:93] Provisioning new machine with config: &{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.165 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequ
ested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 02:16:19.071032    8584 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0229 02:16:19.071291    8584 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 02:16:19.071291    8584 start.go:159] libmachine.API.Create for "multinode-314500" (driver="hyperv")
	I0229 02:16:19.071291    8584 client.go:168] LocalClient.Create starting
	I0229 02:16:19.072518    8584 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0229 02:16:19.072841    8584 main.go:141] libmachine: Decoding PEM data...
	I0229 02:16:19.072841    8584 main.go:141] libmachine: Parsing certificate...
	I0229 02:16:19.073047    8584 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0229 02:16:19.073192    8584 main.go:141] libmachine: Decoding PEM data...
	I0229 02:16:19.073192    8584 main.go:141] libmachine: Parsing certificate...
	I0229 02:16:19.073192    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0229 02:16:20.920705    8584 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0229 02:16:20.920705    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:20.921317    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0229 02:16:22.576054    8584 main.go:141] libmachine: [stdout =====>] : False
	
	I0229 02:16:22.576118    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:22.576186    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 02:16:24.018073    8584 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 02:16:24.018073    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:24.018073    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 02:16:27.519984    8584 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 02:16:27.521004    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:27.522825    8584 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 02:16:27.901527    8584 main.go:141] libmachine: Creating SSH key...
	I0229 02:16:28.097501    8584 main.go:141] libmachine: Creating VM...
	I0229 02:16:28.097501    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 02:16:30.904965    8584 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 02:16:30.905182    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:30.905182    8584 main.go:141] libmachine: Using switch "Default Switch"
	I0229 02:16:30.905182    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 02:16:32.604830    8584 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 02:16:32.604830    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:32.604830    8584 main.go:141] libmachine: Creating VHD
	I0229 02:16:32.604937    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0229 02:16:36.234786    8584 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 68DA3A88-B6E1-46DA-93D1-804B8B5EA2B6
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0229 02:16:36.234786    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:36.234786    8584 main.go:141] libmachine: Writing magic tar header
	I0229 02:16:36.235274    8584 main.go:141] libmachine: Writing SSH key tar header
	I0229 02:16:36.244776    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0229 02:16:39.318116    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:39.318116    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:39.318116    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\disk.vhd' -SizeBytes 20000MB
	I0229 02:16:41.733381    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:41.733986    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:41.734091    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-314500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0229 02:16:45.142995    8584 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-314500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 02:16:45.142995    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:45.143938    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-314500-m02 -DynamicMemoryEnabled $false
	I0229 02:16:47.265484    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:47.265484    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:47.265616    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-314500-m02 -Count 2
	I0229 02:16:49.321416    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:49.321772    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:49.321890    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-314500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\boot2docker.iso'
	I0229 02:16:51.771609    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:51.771808    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:51.771808    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-314500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\disk.vhd'
	I0229 02:16:54.237843    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:54.238288    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:54.238288    8584 main.go:141] libmachine: Starting VM...
	I0229 02:16:54.238364    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-314500-m02
	I0229 02:16:56.948503    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:56.948564    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:56.948564    8584 main.go:141] libmachine: Waiting for host to start...
	I0229 02:16:56.948691    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:16:59.081484    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:16:59.081484    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:59.081484    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:01.451137    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:17:01.451137    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:02.451735    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:04.482600    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:04.482600    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:04.482600    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:06.855829    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:17:06.855829    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:07.863335    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:09.971502    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:09.971502    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:09.971663    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:12.324229    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:17:12.324333    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:13.330922    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:15.391366    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:15.391404    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:15.391404    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:17.718844    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:17:17.718973    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:18.726464    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:20.785794    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:20.785794    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:20.785794    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:23.184930    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:23.184930    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:23.185003    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:25.185603    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:25.185847    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:25.185847    8584 machine.go:88] provisioning docker machine ...
	I0229 02:17:25.185847    8584 buildroot.go:166] provisioning hostname "multinode-314500-m02"
	I0229 02:17:25.185847    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:27.225297    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:27.226441    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:27.226473    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:29.607904    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:29.607904    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:29.612460    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:17:29.622734    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:17:29.622734    8584 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-314500-m02 && echo "multinode-314500-m02" | sudo tee /etc/hostname
	I0229 02:17:29.783303    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-314500-m02
	
	I0229 02:17:29.783303    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:31.813172    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:31.813290    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:31.813290    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:34.232804    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:34.233345    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:34.237405    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:17:34.237468    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:17:34.237468    8584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-314500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-314500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-314500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:17:34.392771    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:17:34.392771    8584 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 02:17:34.392853    8584 buildroot.go:174] setting up certificates
	I0229 02:17:34.392853    8584 provision.go:83] configureAuth start
	I0229 02:17:34.392853    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:36.409714    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:36.409714    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:36.409926    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:38.862723    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:38.862870    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:38.862870    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:40.858876    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:40.859201    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:40.859201    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:43.234342    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:43.234419    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:43.234419    8584 provision.go:138] copyHostCerts
	I0229 02:17:43.234567    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 02:17:43.234765    8584 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 02:17:43.234765    8584 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 02:17:43.235285    8584 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 02:17:43.236034    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 02:17:43.236034    8584 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 02:17:43.236034    8584 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 02:17:43.236034    8584 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0229 02:17:43.236807    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 02:17:43.237396    8584 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 02:17:43.237396    8584 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 02:17:43.237497    8584 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 02:17:43.238127    8584 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-314500-m02 san=[172.19.5.202 172.19.5.202 localhost 127.0.0.1 minikube multinode-314500-m02]
	I0229 02:17:43.524218    8584 provision.go:172] copyRemoteCerts
	I0229 02:17:43.533207    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:17:43.533207    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:45.530673    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:45.530673    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:45.530747    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:47.941248    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:47.941248    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:47.942211    8584 sshutil.go:53] new ssh client: &{IP:172.19.5.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\id_rsa Username:docker}
	I0229 02:17:48.060802    8584 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5273422s)
	I0229 02:17:48.060802    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 02:17:48.061398    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 02:17:48.106726    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 02:17:48.107259    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0229 02:17:48.151608    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 02:17:48.152143    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:17:48.200186    8584 provision.go:86] duration metric: configureAuth took 13.8065619s
	I0229 02:17:48.200186    8584 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:17:48.200842    8584 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:17:48.200920    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:50.211498    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:50.211498    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:50.211498    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:52.592758    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:52.592758    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:52.597792    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:17:52.598309    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:17:52.598381    8584 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 02:17:52.757991    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 02:17:52.757991    8584 buildroot.go:70] root file system type: tmpfs
	I0229 02:17:52.757991    8584 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 02:17:52.758523    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:54.794561    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:54.794987    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:54.795068    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:57.208524    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:57.208524    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:57.212707    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:17:57.213061    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:17:57.213061    8584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.2.165"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 02:17:57.378362    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.2.165
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 02:17:57.378395    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:59.428307    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:59.428307    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:59.428307    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:01.824823    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:01.824823    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:01.828335    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:18:01.828927    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:18:01.828927    8584 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 02:18:02.863847    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 02:18:02.863847    8584 machine.go:91] provisioned docker machine in 37.6758983s
	I0229 02:18:02.863847    8584 client.go:171] LocalClient.Create took 1m43.7867595s
	I0229 02:18:02.864958    8584 start.go:167] duration metric: libmachine.API.Create for "multinode-314500" took 1m43.78787s
	I0229 02:18:02.864958    8584 start.go:300] post-start starting for "multinode-314500-m02" (driver="hyperv")
	I0229 02:18:02.864958    8584 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:18:02.874256    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:18:02.874256    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:04.910564    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:04.910633    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:04.910703    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:07.378336    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:07.378336    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:07.378487    8584 sshutil.go:53] new ssh client: &{IP:172.19.5.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\id_rsa Username:docker}
	I0229 02:18:07.486010    8584 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6114972s)
	I0229 02:18:07.496984    8584 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:18:07.504935    8584 command_runner.go:130] > NAME=Buildroot
	I0229 02:18:07.504935    8584 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 02:18:07.504935    8584 command_runner.go:130] > ID=buildroot
	I0229 02:18:07.504935    8584 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 02:18:07.504935    8584 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 02:18:07.505148    8584 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:18:07.505148    8584 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 02:18:07.505545    8584 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 02:18:07.508348    8584 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> 33122.pem in /etc/ssl/certs
	I0229 02:18:07.508348    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /etc/ssl/certs/33122.pem
	I0229 02:18:07.517641    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:18:07.536722    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /etc/ssl/certs/33122.pem (1708 bytes)
	I0229 02:18:07.582613    8584 start.go:303] post-start completed in 4.7173917s
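The post-start step above mirrors the local `.minikube\files` tree onto the guest, which is how `...\files\etc\ssl\certs\33122.pem` lands at `/etc/ssl/certs/33122.pem`. A minimal sketch of that path mapping (the temporary layout below is illustrative, not minikube's actual code):

```shell
# Mirror of filesync's local-asset scan: paths under files/ map 1:1 to guest paths.
files_root=$(mktemp -d)
mkdir -p "$files_root/etc/ssl/certs"
touch "$files_root/etc/ssl/certs/33122.pem"
# Remote path = path relative to files_root, with a leading slash.
(cd "$files_root" && find . -type f | sed 's|^\.||')
```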
	I0229 02:18:07.584757    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:09.616749    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:09.616749    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:09.617616    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:12.029126    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:12.029126    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:12.029537    8584 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:18:12.031412    8584 start.go:128] duration metric: createHost completed in 1m52.9539719s
	I0229 02:18:12.031412    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:14.046188    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:14.046538    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:14.046589    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:16.455401    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:16.455976    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:16.461299    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:18:16.461877    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:18:16.461877    8584 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 02:18:16.593240    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173096.763630370
	
	I0229 02:18:16.593344    8584 fix.go:206] guest clock: 1709173096.763630370
	I0229 02:18:16.593344    8584 fix.go:219] Guest: 2024-02-29 02:18:16.76363037 +0000 UTC Remote: 2024-02-29 02:18:12.0314125 +0000 UTC m=+312.004845001 (delta=4.73221787s)
	I0229 02:18:16.593455    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:18.589352    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:18.589352    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:18.589352    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:21.027873    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:21.027947    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:21.033045    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:18:21.033045    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:18:21.033569    8584 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709173096
	I0229 02:18:21.167765    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 02:18:16 UTC 2024
	
	I0229 02:18:21.167765    8584 fix.go:226] clock set: Thu Feb 29 02:18:16 UTC 2024
	 (err=<nil>)
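In the exchange above, fix.go reads the guest clock over SSH (`date +%s.%N`), finds it about 4.73s ahead of the host's recorded time, and resets it with `sudo date -s @1709173096`. A sketch of the delta computation, using the log's epoch values truncated to whole seconds:

```shell
# Guest vs. host clock comparison, as fix.go performs over SSH.
guest=1709173096   # guest's `date +%s` (log value, truncated to seconds)
host=1709173092    # host's view of the same moment
delta=$((guest - host))
echo "delta=${delta}s"
# When the drift is too large, minikube resets the guest clock:
#   sudo date -s @$host   (shown for reference, not executed here)
```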
	I0229 02:18:21.167765    8584 start.go:83] releasing machines lock for "multinode-314500-m02", held for 2m2.0900438s
	I0229 02:18:21.167765    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:23.153744    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:23.153744    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:23.153744    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:25.578574    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:25.578574    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:25.578800    8584 out.go:177] * Found network options:
	I0229 02:18:25.580065    8584 out.go:177]   - NO_PROXY=172.19.2.165
	W0229 02:18:25.580612    8584 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 02:18:25.580835    8584 out.go:177]   - NO_PROXY=172.19.2.165
	W0229 02:18:25.581420    8584 proxy.go:119] fail to check proxy env: Error ip not in block
	W0229 02:18:25.583050    8584 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 02:18:25.585206    8584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:18:25.585373    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:25.593744    8584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 02:18:25.594079    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:27.674183    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:27.674183    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:27.674183    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:27.675185    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:27.675185    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:27.675185    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:30.173701    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:30.174284    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:30.174503    8584 sshutil.go:53] new ssh client: &{IP:172.19.5.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\id_rsa Username:docker}
	I0229 02:18:30.199190    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:30.199190    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:30.199500    8584 sshutil.go:53] new ssh client: &{IP:172.19.5.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\id_rsa Username:docker}
	I0229 02:18:30.277565    8584 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0229 02:18:30.278069    8584 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.6840656s)
	W0229 02:18:30.278069    8584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:18:30.290955    8584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:18:30.389381    8584 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 02:18:30.389381    8584 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0229 02:18:30.389381    8584 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8038229s)
	I0229 02:18:30.389381    8584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:18:30.389381    8584 start.go:475] detecting cgroup driver to use...
	I0229 02:18:30.389381    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:18:30.425450    8584 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0229 02:18:30.436466    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 02:18:30.467218    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:18:30.486122    8584 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:18:30.494627    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:18:30.522647    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:18:30.553444    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:18:30.581124    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:18:30.616953    8584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:18:30.644924    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 02:18:30.674292    8584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:18:30.691155    8584 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 02:18:30.703168    8584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:18:30.731843    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:18:30.943189    8584 ssh_runner.go:195] Run: sudo systemctl restart containerd
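The run of `sed` edits above switches containerd to the `cgroupfs` driver and the v2 runc shim before restarting it. The same substitutions applied to a small sample config fragment (the fragment itself is made up; the `sed` expressions are the ones from the log):

```shell
# Reproduce the log's containerd config edits against a throwaway file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
    SystemdCgroup = true
    runtime_type = "io.containerd.runc.v1"
EOF
# Disable the systemd cgroup driver, preserving indentation via \1.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
# Move any v1 runc shim reference to the v2 shim.
sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' "$cfg"
cat "$cfg"
```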
	I0229 02:18:30.974201    8584 start.go:475] detecting cgroup driver to use...
	I0229 02:18:30.984195    8584 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 02:18:31.010398    8584 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0229 02:18:31.010398    8584 command_runner.go:130] > [Unit]
	I0229 02:18:31.010398    8584 command_runner.go:130] > Description=Docker Application Container Engine
	I0229 02:18:31.010398    8584 command_runner.go:130] > Documentation=https://docs.docker.com
	I0229 02:18:31.010398    8584 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0229 02:18:31.010398    8584 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0229 02:18:31.010398    8584 command_runner.go:130] > StartLimitBurst=3
	I0229 02:18:31.010398    8584 command_runner.go:130] > StartLimitIntervalSec=60
	I0229 02:18:31.010398    8584 command_runner.go:130] > [Service]
	I0229 02:18:31.010398    8584 command_runner.go:130] > Type=notify
	I0229 02:18:31.010398    8584 command_runner.go:130] > Restart=on-failure
	I0229 02:18:31.010398    8584 command_runner.go:130] > Environment=NO_PROXY=172.19.2.165
	I0229 02:18:31.010398    8584 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0229 02:18:31.010398    8584 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0229 02:18:31.010398    8584 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0229 02:18:31.010931    8584 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0229 02:18:31.010981    8584 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0229 02:18:31.011019    8584 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0229 02:18:31.011019    8584 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0229 02:18:31.011082    8584 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0229 02:18:31.011138    8584 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0229 02:18:31.011138    8584 command_runner.go:130] > ExecStart=
	I0229 02:18:31.011197    8584 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0229 02:18:31.011243    8584 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0229 02:18:31.011243    8584 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0229 02:18:31.011315    8584 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0229 02:18:31.011359    8584 command_runner.go:130] > LimitNOFILE=infinity
	I0229 02:18:31.011359    8584 command_runner.go:130] > LimitNPROC=infinity
	I0229 02:18:31.011359    8584 command_runner.go:130] > LimitCORE=infinity
	I0229 02:18:31.011425    8584 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0229 02:18:31.011425    8584 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0229 02:18:31.011425    8584 command_runner.go:130] > TasksMax=infinity
	I0229 02:18:31.011495    8584 command_runner.go:130] > TimeoutStartSec=0
	I0229 02:18:31.011495    8584 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0229 02:18:31.011495    8584 command_runner.go:130] > Delegate=yes
	I0229 02:18:31.011557    8584 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0229 02:18:31.011557    8584 command_runner.go:130] > KillMode=process
	I0229 02:18:31.011557    8584 command_runner.go:130] > [Install]
	I0229 02:18:31.011626    8584 command_runner.go:130] > WantedBy=multi-user.target
	I0229 02:18:31.022514    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:18:31.053734    8584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:18:31.093320    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:18:31.125810    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:18:31.159106    8584 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:18:31.209007    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:18:31.236274    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:18:31.271193    8584 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0229 02:18:31.283174    8584 ssh_runner.go:195] Run: which cri-dockerd
	I0229 02:18:31.290285    8584 command_runner.go:130] > /usr/bin/cri-dockerd
	I0229 02:18:31.300670    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 02:18:31.320930    8584 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 02:18:31.363898    8584 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 02:18:31.567044    8584 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 02:18:31.755853    8584 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 02:18:31.755981    8584 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 02:18:31.800154    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:18:32.002260    8584 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 02:18:33.510987    8584 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5086429s)
	I0229 02:18:33.521617    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 02:18:33.555076    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:18:33.593354    8584 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 02:18:33.787890    8584 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 02:18:34.002397    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:18:34.193768    8584 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 02:18:34.233767    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:18:34.268183    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:18:34.461138    8584 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 02:18:34.565934    8584 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 02:18:34.575816    8584 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 02:18:34.586219    8584 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0229 02:18:34.586284    8584 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 02:18:34.586284    8584 command_runner.go:130] > Device: 0,22	Inode: 891         Links: 1
	I0229 02:18:34.586284    8584 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0229 02:18:34.586284    8584 command_runner.go:130] > Access: 2024-02-29 02:18:34.658282101 +0000
	I0229 02:18:34.586284    8584 command_runner.go:130] > Modify: 2024-02-29 02:18:34.658282101 +0000
	I0229 02:18:34.586284    8584 command_runner.go:130] > Change: 2024-02-29 02:18:34.662282244 +0000
	I0229 02:18:34.586356    8584 command_runner.go:130] >  Birth: -
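After restarting cri-docker, start.go waits up to 60s for `/var/run/cri-dockerd.sock` to appear before using it (the `stat` output above confirms it exists as a socket). A hedged sketch of such a wait loop (`wait_for_socket` is a made-up helper, not minikube's):

```shell
# Poll for a unix socket path, roughly like start.go's 60s socket wait.
wait_for_socket() {
  sock_path=$1
  tries=$2
  while [ "$tries" -gt 0 ]; do
    # -S: path exists and is a socket (what `stat` reported above).
    if [ -S "$sock_path" ]; then return 0; fi
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}
```

In the log, minikube performs the equivalent check with `stat /var/run/cri-dockerd.sock` under its 60-second deadline.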
	I0229 02:18:34.586415    8584 start.go:543] Will wait 60s for crictl version
	I0229 02:18:34.594891    8584 ssh_runner.go:195] Run: which crictl
	I0229 02:18:34.600806    8584 command_runner.go:130] > /usr/bin/crictl
	I0229 02:18:34.613152    8584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:18:34.683047    8584 command_runner.go:130] > Version:  0.1.0
	I0229 02:18:34.683047    8584 command_runner.go:130] > RuntimeName:  docker
	I0229 02:18:34.683047    8584 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0229 02:18:34.683047    8584 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 02:18:34.683047    8584 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 02:18:34.690707    8584 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:18:34.727739    8584 command_runner.go:130] > 24.0.7
	I0229 02:18:34.736706    8584 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:18:34.772261    8584 command_runner.go:130] > 24.0.7
	I0229 02:18:34.773681    8584 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 02:18:34.774281    8584 out.go:177]   - env NO_PROXY=172.19.2.165
	I0229 02:18:34.775285    8584 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 02:18:34.778553    8584 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 02:18:34.779106    8584 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 02:18:34.779106    8584 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 02:18:34.779106    8584 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:a3:c1 Flags:up|broadcast|multicast|running}
	I0229 02:18:34.782065    8584 ip.go:210] interface addr: fe80::fc78:4865:5cac:d448/64
	I0229 02:18:34.782065    8584 ip.go:210] interface addr: 172.19.0.1/20
	I0229 02:18:34.790491    8584 ssh_runner.go:195] Run: grep 172.19.0.1	host.minikube.internal$ /etc/hosts
	I0229 02:18:34.796849    8584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
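The bash one-liner above is an idempotent hosts-file edit: drop any existing `host.minikube.internal` line, append the fresh mapping, and copy the result back. The same pattern against a throwaway file (the simplified grep pattern here does not anchor on the leading tab the way the original does):

```shell
# Idempotent hosts-file update, mirroring the log's grep/echo pipeline.
hosts_file=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.1\thost.minikube.internal\n' > "$hosts_file"
ip=172.19.0.1
# Keep every line except the old minikube entry, then append the new one.
{ grep -v 'host\.minikube\.internal$' "$hosts_file"; \
  printf '%s\thost.minikube.internal\n' "$ip"; } > "$hosts_file.new"
mv "$hosts_file.new" "$hosts_file"
cat "$hosts_file"
```

Running it again with the same IP leaves the file unchanged, which is the point of the grep-then-append shape.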
	I0229 02:18:34.818492    8584 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500 for IP: 172.19.5.202
	I0229 02:18:34.818492    8584 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:18:34.818492    8584 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 02:18:34.818492    8584 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 02:18:34.819491    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 02:18:34.819491    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 02:18:34.819491    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 02:18:34.819491    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 02:18:34.819491    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem (1338 bytes)
	W0229 02:18:34.820491    8584 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312_empty.pem, impossibly tiny 0 bytes
	I0229 02:18:34.820491    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 02:18:34.820491    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 02:18:34.820491    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 02:18:34.820491    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0229 02:18:34.821487    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem (1708 bytes)
	I0229 02:18:34.821487    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:18:34.821487    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem -> /usr/share/ca-certificates/3312.pem
	I0229 02:18:34.821487    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /usr/share/ca-certificates/33122.pem
	I0229 02:18:34.822487    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:18:34.868245    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:18:34.918714    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:18:34.967307    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:18:35.017796    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:18:35.066669    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem --> /usr/share/ca-certificates/3312.pem (1338 bytes)
	I0229 02:18:35.114276    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /usr/share/ca-certificates/33122.pem (1708 bytes)
	I0229 02:18:35.168006    8584 ssh_runner.go:195] Run: openssl version
	I0229 02:18:35.176800    8584 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 02:18:35.185691    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:18:35.215735    8584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:18:35.222256    8584 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:18:35.222256    8584 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:18:35.230885    8584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:18:35.240332    8584 command_runner.go:130] > b5213941
	I0229 02:18:35.249159    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:18:35.281031    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3312.pem && ln -fs /usr/share/ca-certificates/3312.pem /etc/ssl/certs/3312.pem"
	I0229 02:18:35.309172    8584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3312.pem
	I0229 02:18:35.315998    8584 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:18:35.315998    8584 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:18:35.326720    8584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3312.pem
	I0229 02:18:35.335106    8584 command_runner.go:130] > 51391683
	I0229 02:18:35.344025    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3312.pem /etc/ssl/certs/51391683.0"
	I0229 02:18:35.372591    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/33122.pem && ln -fs /usr/share/ca-certificates/33122.pem /etc/ssl/certs/33122.pem"
	I0229 02:18:35.406771    8584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/33122.pem
	I0229 02:18:35.415262    8584 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:18:35.415680    8584 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:18:35.425523    8584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/33122.pem
	I0229 02:18:35.433811    8584 command_runner.go:130] > 3ec20f2e
	I0229 02:18:35.445146    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/33122.pem /etc/ssl/certs/3ec20f2e.0"
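Each CA copied under `/usr/share/ca-certificates` above is then linked into `/etc/ssl/certs` under its OpenSSL subject-hash name, which is where the `b5213941.0`, `51391683.0`, and `3ec20f2e.0` symlink names come from. A sketch of that convention using a throwaway self-signed cert (nothing below touches the system cert directories):

```shell
# Demonstrate the <subject-hash>.0 symlink convention from the log.
dir=$(mktemp -d)
cd "$dir"
# Throwaway self-signed cert; the CN is arbitrary.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo" -keyout key.pem -out cert.pem 2>/dev/null
hash=$(openssl x509 -hash -noout -in cert.pem)
# `test -L || ln -fs`: only create the link if it is not already there,
# exactly as the log's command does.
test -L "$hash.0" || ln -fs cert.pem "$hash.0"
ls -l "$hash.0"
```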
	I0229 02:18:35.475114    8584 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:18:35.481743    8584 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:18:35.482501    8584 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:18:35.489621    8584 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 02:18:35.524210    8584 command_runner.go:130] > cgroupfs
	I0229 02:18:35.524318    8584 cni.go:84] Creating CNI manager for ""
	I0229 02:18:35.524318    8584 cni.go:136] 2 nodes found, recommending kindnet
	I0229 02:18:35.524318    8584 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:18:35.524429    8584 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.5.202 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-314500 NodeName:multinode-314500-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.2.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.5.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:18:35.524626    8584 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.5.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-314500-m02"
	  kubeletExtraArgs:
	    node-ip: 172.19.5.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.2.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
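The kubeadm config rendered above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal stdlib-only Python sketch, illustrative rather than minikube's actual code, that splits such a stream on `---` separators and lists each document's `kind`:

```python
# Split a multi-document YAML stream on "---" separators and report each
# document's "kind" field, using only standard-library string operations.
def list_kinds(stream: str) -> list[str]:
    kinds = []
    for doc in stream.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kinds.append(line.split(":", 1)[1].strip())
                break
    return kinds

config = """apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

print(list_kinds(config))
```

kubeadm itself reads each document's `apiVersion`/`kind` pair the same way before dispatching to the matching decoder.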
	
	I0229 02:18:35.524738    8584 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-314500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.5.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:18:35.533460    8584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:18:35.552711    8584 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0229 02:18:35.552711    8584 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0229 02:18:35.561470    8584 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0229 02:18:35.584271    8584 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet
	I0229 02:18:35.584271    8584 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm
	I0229 02:18:35.584271    8584 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl
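Each download above carries a `checksum=file:...sha256` parameter, meaning the binary is verified against the SHA-256 digest published alongside it on dl.k8s.io. A minimal sketch of that verification step (a stand-in file rather than a real kubelet binary; minikube's own check lives in its download package):

```python
import hashlib
import os
import tempfile

def sha256_matches(path: str, expected_hex: str) -> bool:
    """Stream a file through SHA-256 and compare with a published digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex.strip()

# Demo with a throwaway file standing in for a downloaded binary.
data = b"stand-in for the kubelet binary"
expected = hashlib.sha256(data).hexdigest()
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data)
    path = f.name
print(sha256_matches(path, expected))  # True
os.unlink(path)
```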
	I0229 02:18:36.998042    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0229 02:18:37.009077    8584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0229 02:18:37.017133    8584 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0229 02:18:37.017341    8584 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0229 02:18:37.017341    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0229 02:18:40.084939    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0229 02:18:40.095940    8584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0229 02:18:40.104473    8584 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0229 02:18:40.104473    8584 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0229 02:18:40.104473    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0229 02:18:45.263699    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:18:45.287939    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0229 02:18:45.299336    8584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0229 02:18:45.305390    8584 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0229 02:18:45.305390    8584 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0229 02:18:45.305390    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0229 02:18:45.925172    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0229 02:18:45.944660    8584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0229 02:18:45.978335    8584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:18:46.017572    8584 ssh_runner.go:195] Run: grep 172.19.2.165	control-plane.minikube.internal$ /etc/hosts
	I0229 02:18:46.024303    8584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.2.165	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
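The one-liner above first strips any line already ending in a tab plus `control-plane.minikube.internal`, then appends the fresh entry, so re-running it never duplicates the host mapping. A Python sketch of the same idempotent rewrite (operating on a string rather than the real `/etc/hosts`):

```python
def set_hosts_entry(hosts: str, ip: str, name: str) -> str:
    """Remove any existing '<ip>\t<name>' line, then append the new one."""
    kept = [l for l in hosts.splitlines() if not l.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

hosts = "127.0.0.1\tlocalhost\n172.19.0.1\tcontrol-plane.minikube.internal\n"
once = set_hosts_entry(hosts, "172.19.2.165", "control-plane.minikube.internal")
twice = set_hosts_entry(once, "172.19.2.165", "control-plane.minikube.internal")
print(once == twice)  # idempotent: True
```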
	I0229 02:18:46.045317    8584 host.go:66] Checking if "multinode-314500" exists ...
	I0229 02:18:46.045993    8584 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:18:46.045993    8584 start.go:304] JoinCluster: &{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.165 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.5.202 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:18:46.046193    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0229 02:18:46.046251    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:18:48.030615    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:48.030615    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:48.030726    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:50.433720    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:18:50.434239    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:50.434239    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:18:50.638259    8584 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token o9oq2m.h2bk0u2kuwdvt40c --discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 
	I0229 02:18:50.638259    8584 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.5918101s)
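The token embedded in the printed join command follows the kubeadm bootstrap-token format: a 6-character token ID, a dot, and a 16-character token secret, all lowercase alphanumeric. A quick check against the token from the output above:

```python
import re

# kubeadm bootstrap tokens: "<6 chars>.<16 chars>", [a-z0-9] only.
BOOTSTRAP_TOKEN_RE = re.compile(r"^[a-z0-9]{6}\.[a-z0-9]{16}$")

token = "o9oq2m.h2bk0u2kuwdvt40c"
print(bool(BOOTSTRAP_TOKEN_RE.match(token)))  # True
```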
	I0229 02:18:50.638259    8584 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.19.5.202 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 02:18:50.638259    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o9oq2m.h2bk0u2kuwdvt40c --discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-314500-m02"
	I0229 02:18:50.699991    8584 command_runner.go:130] ! W0229 02:18:50.872733    1324 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0229 02:18:50.889853    8584 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:18:53.684561    8584 command_runner.go:130] > [preflight] Running pre-flight checks
	I0229 02:18:53.684561    8584 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0229 02:18:53.684561    8584 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0229 02:18:53.684561    8584 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:18:53.684561    8584 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:18:53.684561    8584 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 02:18:53.684715    8584 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0229 02:18:53.684715    8584 command_runner.go:130] > This node has joined the cluster:
	I0229 02:18:53.684715    8584 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0229 02:18:53.684715    8584 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0229 02:18:53.684715    8584 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0229 02:18:53.684802    8584 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o9oq2m.h2bk0u2kuwdvt40c --discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-314500-m02": (3.0463738s)
	I0229 02:18:53.684802    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0229 02:18:53.931915    8584 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0229 02:18:54.149000    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=multinode-314500 minikube.k8s.io/updated_at=2024_02_29T02_18_54_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:54.276936    8584 command_runner.go:130] > node/multinode-314500-m02 labeled
	I0229 02:18:54.276936    8584 start.go:306] JoinCluster complete in 8.2304841s
	I0229 02:18:54.277943    8584 cni.go:84] Creating CNI manager for ""
	I0229 02:18:54.277943    8584 cni.go:136] 2 nodes found, recommending kindnet
	I0229 02:18:54.287322    8584 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 02:18:54.295314    8584 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 02:18:54.295314    8584 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 02:18:54.295314    8584 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 02:18:54.295430    8584 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 02:18:54.295430    8584 command_runner.go:130] > Access: 2024-02-29 02:14:07.987005400 +0000
	I0229 02:18:54.295430    8584 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 02:18:54.295430    8584 command_runner.go:130] > Change: 2024-02-29 02:13:59.368000000 +0000
	I0229 02:18:54.295430    8584 command_runner.go:130] >  Birth: -
	I0229 02:18:54.295529    8584 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 02:18:54.295574    8584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 02:18:54.339530    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 02:18:54.828066    8584 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0229 02:18:54.828174    8584 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0229 02:18:54.828174    8584 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0229 02:18:54.828174    8584 command_runner.go:130] > daemonset.apps/kindnet configured
	I0229 02:18:54.829484    8584 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:18:54.830286    8584 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:18:54.831290    8584 round_trippers.go:463] GET https://172.19.2.165:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 02:18:54.831290    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:54.831374    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:54.831374    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:54.847724    8584 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0229 02:18:54.847724    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:54.847724    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:54.847724    8584 round_trippers.go:580]     Content-Length: 291
	I0229 02:18:54.847724    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:55 GMT
	I0229 02:18:54.847724    8584 round_trippers.go:580]     Audit-Id: e12071b6-30c0-4d6d-9023-573b3f854ed4
	I0229 02:18:54.847724    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:54.847724    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:54.847724    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:54.848623    8584 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"439","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 02:18:54.848743    8584 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-314500" context rescaled to 1 replicas
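The GET on the coredns `/scale` subresource above returns an `autoscaling/v1` Scale object; its `spec.replicas` is what the rescale-to-1 decision reads. Parsing the response body from the log (metadata abridged):

```python
import json

# Response body from the GET on .../deployments/coredns/scale (abridged).
body = (
    '{"kind":"Scale","apiVersion":"autoscaling/v1",'
    '"metadata":{"name":"coredns","namespace":"kube-system"},'
    '"spec":{"replicas":1},'
    '"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}'
)
scale = json.loads(body)
print(scale["kind"], scale["spec"]["replicas"])  # Scale 1
```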
	I0229 02:18:54.848818    8584 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.19.5.202 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 02:18:54.849622    8584 out.go:177] * Verifying Kubernetes components...
	I0229 02:18:54.859551    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:18:54.884779    8584 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:18:54.885357    8584 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:18:54.886093    8584 node_ready.go:35] waiting up to 6m0s for node "multinode-314500-m02" to be "Ready" ...
	I0229 02:18:54.886178    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:54.886178    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:54.886263    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:54.886292    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:54.889540    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:54.889540    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:54.889540    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:54.889540    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:54.889540    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:54.889540    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:55 GMT
	I0229 02:18:54.889540    8584 round_trippers.go:580]     Audit-Id: 16a67bb6-f9fa-47dc-9acc-fded8dd1ddf0
	I0229 02:18:54.889540    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:54.890077    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
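node_ready polls this GET until the node's `Ready` condition reports `status: "True"`; the response bodies logged here are truncated before the `status.conditions` array. A sketch of the readiness check against a hand-written node object (the conditions below are illustrative, not taken from this log):

```python
def node_is_ready(node: dict) -> bool:
    """True when the node has a Ready condition with status "True"."""
    for cond in node.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False

# Illustrative node object; a freshly joined node is typically NotReady
# until its CNI (kindnet here) is up.
node = {
    "metadata": {"name": "multinode-314500-m02"},
    "status": {"conditions": [
        {"type": "MemoryPressure", "status": "False"},
        {"type": "Ready", "status": "False", "reason": "KubeletNotReady"},
    ]},
}
print(node_is_ready(node))  # False
```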
	I0229 02:18:55.391661    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:55.391763    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:55.391763    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:55.391763    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:55.397889    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:18:55.397956    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:55.397956    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:55.397956    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:55.398023    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:55.398023    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:55.398023    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:55 GMT
	I0229 02:18:55.398023    8584 round_trippers.go:580]     Audit-Id: 76e07a31-ea9d-45a0-bac4-b0a49382c981
	I0229 02:18:55.398637    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:55.894750    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:55.894865    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:55.894865    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:55.894865    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:55.898265    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:55.898265    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:55.898265    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:55.898265    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:55.898265    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:55.898265    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:56 GMT
	I0229 02:18:55.898265    8584 round_trippers.go:580]     Audit-Id: db33e390-9484-47f5-9023-d4f5140c6a73
	I0229 02:18:55.898265    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:55.899762    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:56.397336    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:56.397336    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:56.397336    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:56.397336    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:56.400945    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:56.400945    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:56.400945    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:56.401544    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:56.401544    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:56 GMT
	I0229 02:18:56.401544    8584 round_trippers.go:580]     Audit-Id: 7b5663c7-4127-436f-a916-f944f1a9362c
	I0229 02:18:56.401544    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:56.401544    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:56.401804    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:56.899952    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:56.899952    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:56.899952    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:56.899952    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:56.913982    8584 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0229 02:18:56.913982    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:56.913982    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:56.913982    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:56.913982    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:56.913982    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:56.913982    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:57 GMT
	I0229 02:18:56.913982    8584 round_trippers.go:580]     Audit-Id: 8cfef47d-31b6-4936-8599-942d267d5c62
	I0229 02:18:56.916795    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:56.917437    8584 node_ready.go:58] node "multinode-314500-m02" has status "Ready":"False"
	I0229 02:18:57.388540    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:57.388540    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:57.388540    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:57.388540    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:57.392537    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:57.392537    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:57.392537    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:57.392537    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:57 GMT
	I0229 02:18:57.392537    8584 round_trippers.go:580]     Audit-Id: 9689fe16-0d2b-45b2-bb7b-66bf24615cf8
	I0229 02:18:57.392537    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:57.392537    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:57.392537    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:57.392737    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:57.905825    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:57.905825    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:57.905825    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:57.905825    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:57.909488    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:57.909488    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:57.909488    8584 round_trippers.go:580]     Audit-Id: 93cc2139-334a-44b0-a008-1bab083e526a
	I0229 02:18:57.910054    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:57.910054    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:57.910054    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:57.910054    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:57.910054    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:58 GMT
	I0229 02:18:57.910054    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:58.400349    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:58.400349    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:58.400349    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:58.400349    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:58.404938    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:18:58.404938    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:58.404938    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:58.404938    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:58.404938    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:58 GMT
	I0229 02:18:58.404938    8584 round_trippers.go:580]     Audit-Id: 6e7258bb-b00b-4e60-87e5-7b6336f44acf
	I0229 02:18:58.405337    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:58.405337    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:58.406994    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:58.888065    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:58.888104    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:58.888154    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:58.888154    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:58.892109    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:58.892515    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:58.892515    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:58.892515    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:58.892515    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:58.892515    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:58.892515    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:59 GMT
	I0229 02:18:58.892515    8584 round_trippers.go:580]     Audit-Id: a77bafa9-ce1a-4082-a191-10262cf4fc99
	I0229 02:18:58.892786    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:59.391822    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:59.391822    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:59.391822    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:59.391822    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:59.397773    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:18:59.397840    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:59.397878    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:59.397878    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:59.397878    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:59.397878    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:59.397878    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:59 GMT
	I0229 02:18:59.397878    8584 round_trippers.go:580]     Audit-Id: f98af41c-d5cf-447b-97f9-e89ff1495066
	I0229 02:18:59.398819    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:59.399208    8584 node_ready.go:58] node "multinode-314500-m02" has status "Ready":"False"
	I0229 02:18:59.899172    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:59.899172    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:59.899241    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:59.899241    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:59.902652    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:59.902652    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:59.902652    8584 round_trippers.go:580]     Audit-Id: 5f01caf7-30bf-495c-889c-847503d5df90
	I0229 02:18:59.902652    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:59.902652    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:59.902652    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:59.902652    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:59.902652    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:00 GMT
	I0229 02:18:59.903665    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:00.389363    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:00.389363    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:00.389363    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:00.389447    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:00.393244    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:00.393502    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:00.393502    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:00 GMT
	I0229 02:19:00.393502    8584 round_trippers.go:580]     Audit-Id: c46b7762-54e7-4b1c-bff0-200199beca33
	I0229 02:19:00.393502    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:00.393502    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:00.393502    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:00.393502    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:00.393735    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:00.896187    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:00.896187    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:00.896270    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:00.896270    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:00.906719    8584 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0229 02:19:00.906719    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:00.906719    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:01 GMT
	I0229 02:19:00.906719    8584 round_trippers.go:580]     Audit-Id: 51e07ad7-2bc2-406a-a4af-4f3e1efa975e
	I0229 02:19:00.906719    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:00.906719    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:00.906719    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:00.906719    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:00.906719    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:01.387637    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:01.387637    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:01.387637    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:01.387637    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:01.428791    8584 round_trippers.go:574] Response Status: 200 OK in 41 milliseconds
	I0229 02:19:01.429599    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:01.429599    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:01.429599    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:01.429599    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:01.429599    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:01 GMT
	I0229 02:19:01.429599    8584 round_trippers.go:580]     Audit-Id: 119db968-13f7-4535-8658-337189a296ea
	I0229 02:19:01.429599    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:01.430142    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:01.430583    8584 node_ready.go:58] node "multinode-314500-m02" has status "Ready":"False"
	I0229 02:19:01.888493    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:01.888493    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:01.888493    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:01.888493    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:01.891732    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:01.891732    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:01.891732    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:01.891732    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:01.891732    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:02 GMT
	I0229 02:19:01.891732    8584 round_trippers.go:580]     Audit-Id: 42832318-f25b-490f-aff7-877895b7a3ba
	I0229 02:19:01.892570    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:01.892570    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:01.892677    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:02.396657    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:02.396657    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:02.396657    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:02.396657    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:02.399223    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:02.399223    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:02.399223    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:02.399223    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:02.399223    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:02.399223    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:02.399223    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:02 GMT
	I0229 02:19:02.399223    8584 round_trippers.go:580]     Audit-Id: 35d2dceb-2382-4616-b8e5-6a0d14e043ab
	I0229 02:19:02.400063    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:02.900535    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:02.900535    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:02.900535    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:02.900535    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:02.905068    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:19:02.905068    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:02.905068    8584 round_trippers.go:580]     Audit-Id: ace017cd-ee9f-4bd0-9b52-397013c1b792
	I0229 02:19:02.905068    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:02.905068    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:02.905068    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:02.905068    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:02.905068    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:03 GMT
	I0229 02:19:02.905391    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:03.394230    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:03.394230    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:03.394230    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:03.394230    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:03.396650    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:03.396650    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:03.396650    8584 round_trippers.go:580]     Audit-Id: 5481e7b5-4a4c-446d-a04a-bc2f56d87626
	I0229 02:19:03.396650    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:03.396650    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:03.396650    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:03.396650    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:03.396650    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:03 GMT
	I0229 02:19:03.397840    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:03.886639    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:03.886639    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:03.886639    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:03.886639    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:03.890655    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:19:03.890716    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:03.890716    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:03.890716    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:03.890784    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:03.890784    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:03.890784    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:04 GMT
	I0229 02:19:03.890784    8584 round_trippers.go:580]     Audit-Id: e654a293-e86e-4326-8709-9c556c1b6a16
	I0229 02:19:03.890957    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:03.891302    8584 node_ready.go:58] node "multinode-314500-m02" has status "Ready":"False"
	I0229 02:19:04.395161    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:04.395161    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:04.395161    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:04.395161    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:04.398988    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:04.398988    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:04.398988    8584 round_trippers.go:580]     Audit-Id: 5bd0cf7c-4754-40ca-abc1-50d4188e1af1
	I0229 02:19:04.398988    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:04.398988    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:04.398988    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:04.399337    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:04.399337    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:04 GMT
	I0229 02:19:04.399498    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:04.900506    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:04.900506    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:04.900588    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:04.900588    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:04.904345    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:04.904345    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:04.904345    8584 round_trippers.go:580]     Audit-Id: ea6f6d91-b34a-498d-9365-83f52c171ba8
	I0229 02:19:04.904345    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:04.904345    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:04.904345    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:04.904345    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:04.904345    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:05 GMT
	I0229 02:19:04.905267    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:05.390945    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:05.391025    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:05.391025    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:05.391025    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:05.394999    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:05.395256    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:05.395256    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:05.395256    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:05 GMT
	I0229 02:19:05.395256    8584 round_trippers.go:580]     Audit-Id: 48db6fca-fdd8-4b8e-8acf-d8508f01bc99
	I0229 02:19:05.395256    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:05.395256    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:05.395256    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:05.395433    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:05.897185    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:05.897253    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:05.897253    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:05.897253    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:05.901327    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:19:05.901327    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:05.901327    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:05.901327    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:05.901327    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:06 GMT
	I0229 02:19:05.901327    8584 round_trippers.go:580]     Audit-Id: cc504900-e223-4f88-81bf-24d20ae238cd
	I0229 02:19:05.901327    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:05.901327    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:05.901610    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:05.901610    8584 node_ready.go:58] node "multinode-314500-m02" has status "Ready":"False"
	I0229 02:19:06.399376    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:06.399376    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:06.399445    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:06.399445    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:06.402595    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:06.402595    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:06.402595    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:06.402595    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:06.402595    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:06.402595    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:06 GMT
	I0229 02:19:06.402595    8584 round_trippers.go:580]     Audit-Id: 9f0e2a8e-137c-4cc5-9263-1f23093b3170
	I0229 02:19:06.402595    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:06.403455    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:06.899253    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:06.899323    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:06.899323    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:06.899323    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:06.903424    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:06.903424    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:06.903485    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:06.903485    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:06.903485    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:06.903485    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:07 GMT
	I0229 02:19:06.903485    8584 round_trippers.go:580]     Audit-Id: 70639d9f-98b5-4954-9cc2-ddac86c9913d
	I0229 02:19:06.903485    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:06.903620    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:07.401908    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:07.401994    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:07.402081    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:07.402081    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:07.405358    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:07.405358    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:07.405358    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:07.405358    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:07.405358    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:07 GMT
	I0229 02:19:07.406237    8584 round_trippers.go:580]     Audit-Id: 76fc92ca-3360-4c4e-bd5f-1f7bf5cc52d9
	I0229 02:19:07.406237    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:07.406237    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:07.406494    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:07.888332    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:07.888410    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:07.888410    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:07.888410    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:07.894132    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:19:07.894651    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:07.894736    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:08 GMT
	I0229 02:19:07.894736    8584 round_trippers.go:580]     Audit-Id: 1c3c1ce0-0769-425d-afd3-d1bd32756322
	I0229 02:19:07.894736    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:07.894736    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:07.894736    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:07.894736    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:07.894736    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:08.389430    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:08.389523    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:08.389523    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:08.389523    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:08.392857    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:08.392857    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:08.392857    8584 round_trippers.go:580]     Audit-Id: b2f601d5-c1ee-47a1-b56e-755a0c4ad649
	I0229 02:19:08.393710    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:08.393710    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:08.393710    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:08.393710    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:08.393710    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:08 GMT
	I0229 02:19:08.393710    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:08.393710    8584 node_ready.go:58] node "multinode-314500-m02" has status "Ready":"False"
	I0229 02:19:08.887326    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:08.887326    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:08.887326    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:08.887326    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:08.891027    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:08.891027    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:08.891027    8584 round_trippers.go:580]     Audit-Id: 0f021001-a406-44d6-94d8-93ef736fbe42
	I0229 02:19:08.891670    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:08.891670    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:08.891670    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:08.891670    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:08.891670    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:09 GMT
	I0229 02:19:08.892089    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:09.389425    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:09.389425    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.389425    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.389425    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.396421    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:19:09.396421    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.396421    8584 round_trippers.go:580]     Audit-Id: 5fbac51a-70b7-4815-bf98-6c7af5b38950
	I0229 02:19:09.396421    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.396421    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.396421    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.396421    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.396421    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:09 GMT
	I0229 02:19:09.396421    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:09.894460    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:09.894728    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.894728    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.894728    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.898034    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:09.898034    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.898864    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.898864    8584 round_trippers.go:580]     Audit-Id: 30013302-3f77-4414-bdaf-b073ae7cc7ad
	I0229 02:19:09.898864    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.898864    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.898864    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.898864    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.899055    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"622","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3255 chars]
	I0229 02:19:09.899681    8584 node_ready.go:49] node "multinode-314500-m02" has status "Ready":"True"
	I0229 02:19:09.899760    8584 node_ready.go:38] duration metric: took 15.0128311s waiting for node "multinode-314500-m02" to be "Ready" ...
	I0229 02:19:09.899760    8584 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:19:09.899988    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods
	I0229 02:19:09.899988    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.900078    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.900078    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.906930    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:19:09.906930    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.906930    8584 round_trippers.go:580]     Audit-Id: 89da8e7e-82dd-4ddb-8b70-e96b345eeabf
	I0229 02:19:09.906930    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.906930    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.907383    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.907383    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.907383    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.908247    8584 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"622"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67426 chars]
	I0229 02:19:09.910949    8584 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.911270    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:19:09.911270    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.911270    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.911270    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.913489    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.913489    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.914473    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.914473    8584 round_trippers.go:580]     Audit-Id: 7bc2e034-8bca-4f19-a593-29d856effd79
	I0229 02:19:09.914473    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.914473    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.914473    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.914473    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.914473    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6282 chars]
	I0229 02:19:09.914473    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:09.915219    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.915219    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.915219    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.917425    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.917425    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.918440    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.918440    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.918440    8584 round_trippers.go:580]     Audit-Id: 4809d0f9-91ec-4b02-b3ae-312c0e7cd898
	I0229 02:19:09.918440    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.918440    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.918440    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.918977    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 02:19:09.919175    8584 pod_ready.go:92] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:09.919175    8584 pod_ready.go:81] duration metric: took 7.9754ms waiting for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.919175    8584 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.919175    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-314500
	I0229 02:19:09.919700    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.919700    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.919700    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.921900    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.922797    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.922797    8584 round_trippers.go:580]     Audit-Id: 99d5dfdd-529d-414a-bbab-ec3564725035
	I0229 02:19:09.922797    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.922797    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.922797    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.922797    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.922869    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.922990    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-314500","namespace":"kube-system","uid":"6fc42e7c-48f9-46df-bf2f-861e0684e37f","resourceVersion":"323","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.2.165:2379","kubernetes.io/config.hash":"0b84e88097a2b59a9c108b0f9fa2b889","kubernetes.io/config.mirror":"0b84e88097a2b59a9c108b0f9fa2b889","kubernetes.io/config.seen":"2024-02-29T02:15:52.221392786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5852 chars]
	I0229 02:19:09.923537    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:09.923537    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.923537    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.923537    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.926234    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.926984    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.926984    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.926984    8584 round_trippers.go:580]     Audit-Id: df2638c9-ac54-4653-bb22-db74ffa3024c
	I0229 02:19:09.926984    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.926984    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.926984    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.926984    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.927160    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 02:19:09.927439    8584 pod_ready.go:92] pod "etcd-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:09.927439    8584 pod_ready.go:81] duration metric: took 8.2637ms waiting for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.927439    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.927439    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-314500
	I0229 02:19:09.927439    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.927439    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.927439    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.930125    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.930125    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.930125    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.930125    8584 round_trippers.go:580]     Audit-Id: 7d1cb678-5653-4d94-81c2-91c8fa733734
	I0229 02:19:09.930125    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.930125    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.930125    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.930125    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.931265    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-314500","namespace":"kube-system","uid":"fc266082-ff2c-4bd1-951f-11dc765a28ae","resourceVersion":"303","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.2.165:8443","kubernetes.io/config.hash":"75abc10fab898952206cc1d682d3c922","kubernetes.io/config.mirror":"75abc10fab898952206cc1d682d3c922","kubernetes.io/config.seen":"2024-02-29T02:15:52.221397486Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7390 chars]
	I0229 02:19:09.931368    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:09.931368    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.931368    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.931368    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.933978    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.933978    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.933978    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.934688    8584 round_trippers.go:580]     Audit-Id: d82b569f-a41c-4dec-b10e-f07a48060338
	I0229 02:19:09.934688    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.934688    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.934688    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.934688    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.934688    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 02:19:09.935545    8584 pod_ready.go:92] pod "kube-apiserver-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:09.935545    8584 pod_ready.go:81] duration metric: took 8.1061ms waiting for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.935605    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.935677    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-314500
	I0229 02:19:09.935677    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.935677    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.935677    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.938290    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.938290    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.938290    8584 round_trippers.go:580]     Audit-Id: f1c6fb4d-9811-4d1e-b351-72c1daa1ec71
	I0229 02:19:09.938290    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.938290    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.938290    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.938290    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.938290    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.938290    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-314500","namespace":"kube-system","uid":"58e57902-e113-44a9-b5b5-4aba2ba13491","resourceVersion":"302","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.mirror":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.seen":"2024-02-29T02:15:52.221398986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6965 chars]
	I0229 02:19:09.939348    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:09.939348    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.939348    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.939348    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.943696    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:19:09.943696    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.943696    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.943696    8584 round_trippers.go:580]     Audit-Id: 8b6a9ffa-4316-4827-a442-9ff4f30d586a
	I0229 02:19:09.943696    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.943918    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.943918    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.943918    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.944022    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 02:19:09.944022    8584 pod_ready.go:92] pod "kube-controller-manager-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:09.944022    8584 pod_ready.go:81] duration metric: took 8.417ms waiting for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.944022    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:10.097935    8584 request.go:629] Waited for 152.8628ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gbrl
	I0229 02:19:10.098174    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gbrl
	I0229 02:19:10.098219    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:10.098219    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:10.098219    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:10.104877    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:19:10.104877    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:10.104877    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:10.104877    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:10.104877    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:10.104877    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:10.104877    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:10.104877    8584 round_trippers.go:580]     Audit-Id: 9eb11c5e-881c-42bc-9be1-5f24ca6abc36
	I0229 02:19:10.105667    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4gbrl","generateName":"kube-proxy-","namespace":"kube-system","uid":"accb56cb-79ee-4f16-b05e-91bf554c4a60","resourceVersion":"606","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0229 02:19:10.301343    8584 request.go:629] Waited for 194.8528ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:10.301407    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:10.301407    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:10.301407    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:10.301407    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:10.304982    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:10.304982    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:10.304982    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:10.304982    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:10.305600    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:10.305600    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:10.305600    8584 round_trippers.go:580]     Audit-Id: c18086a1-3697-45c4-8944-d8d7689207d6
	I0229 02:19:10.305600    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:10.305690    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"622","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3255 chars]
	I0229 02:19:10.306238    8584 pod_ready.go:92] pod "kube-proxy-4gbrl" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:10.306384    8584 pod_ready.go:81] duration metric: took 362.2941ms waiting for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:10.306444    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:10.504518    8584 request.go:629] Waited for 197.6938ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:19:10.504606    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:19:10.504682    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:10.504682    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:10.504682    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:10.511019    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:19:10.511019    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:10.511019    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:10.511019    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:10.511019    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:10.511019    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:10.511019    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:10.511019    8584 round_trippers.go:580]     Audit-Id: 4ae6576d-36bf-4327-85f5-11b14661f5ab
	I0229 02:19:10.511729    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6r6j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b84b22d-3786-4f9e-a23a-c7cfc93bb671","resourceVersion":"394","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0229 02:19:10.706346    8584 request.go:629] Waited for 193.8669ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:10.706642    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:10.706642    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:10.706642    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:10.706642    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:10.712840    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:19:10.712895    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:10.712978    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:10.713002    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:10.713002    8584 round_trippers.go:580]     Audit-Id: c5866151-f886-44d1-8800-b5f13dbf5b70
	I0229 02:19:10.713002    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:10.713002    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:10.713002    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:10.713002    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 02:19:10.713751    8584 pod_ready.go:92] pod "kube-proxy-6r6j4" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:10.713751    8584 pod_ready.go:81] duration metric: took 407.2841ms waiting for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:10.713751    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:10.908577    8584 request.go:629] Waited for 194.7255ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:19:10.908997    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:19:10.908997    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:10.908997    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:10.908997    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:10.912468    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:10.912468    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:10.912468    8584 round_trippers.go:580]     Audit-Id: a20a5e1a-e0b4-47eb-ab35-b1c357c97ae2
	I0229 02:19:10.912468    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:10.912468    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:10.912468    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:10.912468    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:10.912468    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:11 GMT
	I0229 02:19:10.913104    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-314500","namespace":"kube-system","uid":"31fcecc6-17de-43a6-892d-37cd915de64b","resourceVersion":"288","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.mirror":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.seen":"2024-02-29T02:15:52.221399886Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4695 chars]
	I0229 02:19:11.095871    8584 request.go:629] Waited for 181.8524ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:11.096146    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:11.096146    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:11.096146    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:11.096146    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:11.104050    8584 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:19:11.104316    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:11.104316    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:11 GMT
	I0229 02:19:11.104316    8584 round_trippers.go:580]     Audit-Id: 7e9bd965-e810-45bc-85a8-4bb609661efb
	I0229 02:19:11.104316    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:11.104316    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:11.104316    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:11.104368    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:11.104637    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 02:19:11.105147    8584 pod_ready.go:92] pod "kube-scheduler-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:11.105147    8584 pod_ready.go:81] duration metric: took 391.3742ms waiting for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:11.105147    8584 pod_ready.go:38] duration metric: took 1.2053198s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:19:11.105147    8584 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:19:11.114287    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:19:11.138275    8584 system_svc.go:56] duration metric: took 33.1261ms WaitForService to wait for kubelet.
	I0229 02:19:11.138407    8584 kubeadm.go:581] duration metric: took 16.2886816s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:19:11.138478    8584 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:19:11.300588    8584 request.go:629] Waited for 161.8606ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes
	I0229 02:19:11.300980    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes
	I0229 02:19:11.300980    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:11.300980    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:11.300980    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:11.304358    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:11.304358    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:11.304358    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:11 GMT
	I0229 02:19:11.304358    8584 round_trippers.go:580]     Audit-Id: 51c168c1-a4fe-434a-973b-2f988dadac6f
	I0229 02:19:11.304358    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:11.304358    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:11.304358    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:11.304358    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:11.305480    8584 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"624"},"items":[{"metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9257 chars]
	I0229 02:19:11.306090    8584 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:19:11.306162    8584 node_conditions.go:123] node cpu capacity is 2
	I0229 02:19:11.306162    8584 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:19:11.306162    8584 node_conditions.go:123] node cpu capacity is 2
	I0229 02:19:11.306162    8584 node_conditions.go:105] duration metric: took 167.6741ms to run NodePressure ...
	I0229 02:19:11.306162    8584 start.go:228] waiting for startup goroutines ...
	I0229 02:19:11.306266    8584 start.go:242] writing updated cluster config ...
	I0229 02:19:11.315752    8584 ssh_runner.go:195] Run: rm -f paused
	I0229 02:19:11.444114    8584 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:19:11.444987    8584 out.go:177] * Done! kubectl is now configured to use "multinode-314500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 29 02:16:16 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:16.836943598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:16:16 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:16.844762626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:16:16 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:16.844839230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:16:16 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:16.844857831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:16:16 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:16.845360758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:16:16 multinode-314500 cri-dockerd[1179]: time="2024-02-29T02:16:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/13f6ae46b7d00cb80295b3fe4d8eaa84529c5242f022e3b07bef994969a9441e/resolv.conf as [nameserver 172.19.0.1]"
	Feb 29 02:16:17 multinode-314500 cri-dockerd[1179]: time="2024-02-29T02:16:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8c944d91b62504f7fd894d21889df5d67be765e4f02c1950a7a2a05132205f99/resolv.conf as [nameserver 172.19.0.1]"
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.077064890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.077136794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.077154495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.077248800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.216491649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.216758964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.217093082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.217451101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:19:35 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:35.111682320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:19:35 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:35.112609163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:19:35 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:35.112830174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:19:35 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:35.113067885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:19:35 multinode-314500 cri-dockerd[1179]: time="2024-02-29T02:19:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ffe504a01e326c3100f593c8c5221a31307571eedec738e86cb135ea892fdda2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 29 02:19:36 multinode-314500 cri-dockerd[1179]: time="2024-02-29T02:19:36Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Feb 29 02:19:36 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:36.486937597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:19:36 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:36.487123907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:19:36 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:36.487169510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:19:36 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:36.487422023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	56fdd268ee231       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago       Running             busybox                   0                   ffe504a01e326       busybox-5b5d89c9d6-qcblm
	11c14ebdfaf67       ead0a4a53df89                                                                                         7 minutes ago       Running             coredns                   0                   8c944d91b6250       coredns-5dd5756b68-8g6tg
	cf65b06d29a0d       6e38f40d628db                                                                                         8 minutes ago       Running             storage-provisioner       0                   13f6ae46b7d00       storage-provisioner
	dd61788b0a0d8       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              8 minutes ago       Running             kindnet-cni               0                   edb41bd5e75d4       kindnet-t9r77
	c93e331307466       83f6cc407eed8                                                                                         8 minutes ago       Running             kube-proxy                0                   4b10f8bd940b8       kube-proxy-6r6j4
	e5bc2b41493bf       73deb9a3f7025                                                                                         8 minutes ago       Running             etcd                      0                   b93004a3ca704       etcd-multinode-314500
	ab0c4864aee58       e3db313c6dbc0                                                                                         8 minutes ago       Running             kube-scheduler            0                   bf7b9750ae9ea       kube-scheduler-multinode-314500
	26b1ab05f99a9       d058aa5ab969c                                                                                         8 minutes ago       Running             kube-controller-manager   0                   96810146c69cf       kube-controller-manager-multinode-314500
	9815e253e1a06       7fe0e6f37db33                                                                                         8 minutes ago       Running             kube-apiserver            0                   2d13a46d83899       kube-apiserver-multinode-314500
	
	
	==> coredns [11c14ebdfaf6] <==
	[INFO] 10.244.1.2:39886 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00019781s
	[INFO] 10.244.0.3:51772 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000254814s
	[INFO] 10.244.0.3:55803 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074704s
	[INFO] 10.244.0.3:52953 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063204s
	[INFO] 10.244.0.3:35356 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000217512s
	[INFO] 10.244.0.3:51868 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000073604s
	[INFO] 10.244.0.3:43420 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103505s
	[INFO] 10.244.0.3:51899 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000210611s
	[INFO] 10.244.0.3:56850 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00018761s
	[INFO] 10.244.1.2:34482 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097705s
	[INFO] 10.244.1.2:36018 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150108s
	[INFO] 10.244.1.2:50932 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064203s
	[INFO] 10.244.1.2:38051 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129007s
	[INFO] 10.244.0.3:41360 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000316917s
	[INFO] 10.244.0.3:60778 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160008s
	[INFO] 10.244.0.3:57010 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133407s
	[INFO] 10.244.0.3:43292 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127407s
	[INFO] 10.244.1.2:34858 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135708s
	[INFO] 10.244.1.2:60624 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000269714s
	[INFO] 10.244.1.2:46116 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100405s
	[INFO] 10.244.1.2:57306 - 5 "PTR IN 1.0.19.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000138608s
	[INFO] 10.244.0.3:57177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000084804s
	[INFO] 10.244.0.3:55463 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000274415s
	[INFO] 10.244.0.3:36032 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185809s
	[INFO] 10.244.0.3:42058 - 5 "PTR IN 1.0.19.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000083604s
	
	
	==> describe nodes <==
	Name:               multinode-314500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-314500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=multinode-314500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T02_15_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:15:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-314500
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:24:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:19:58 +0000   Thu, 29 Feb 2024 02:15:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:19:58 +0000   Thu, 29 Feb 2024 02:15:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:19:58 +0000   Thu, 29 Feb 2024 02:15:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:19:58 +0000   Thu, 29 Feb 2024 02:16:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.2.165
	  Hostname:    multinode-314500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 fcca135ba85d4e2a802ef18b508e0e63
	  System UUID:                d0919ea2-7b7b-e246-9348-925d639776b8
	  Boot ID:                    2a7c10fd-1651-4220-b9f5-aa3595c1b1ae
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-qcblm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 coredns-5dd5756b68-8g6tg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m11s
	  kube-system                 etcd-multinode-314500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m24s
	  kube-system                 kindnet-t9r77                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m11s
	  kube-system                 kube-apiserver-multinode-314500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-controller-manager-multinode-314500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-proxy-6r6j4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-scheduler-multinode-314500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  8m33s (x8 over 8m33s)  kubelet          Node multinode-314500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m33s (x8 over 8m33s)  kubelet          Node multinode-314500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m33s (x7 over 8m33s)  kubelet          Node multinode-314500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m24s                  kubelet          Node multinode-314500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m24s                  kubelet          Node multinode-314500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m24s                  kubelet          Node multinode-314500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m12s                  node-controller  Node multinode-314500 event: Registered Node multinode-314500 in Controller
	  Normal  NodeReady                8m                     kubelet          Node multinode-314500 status is now: NodeReady
	
	
	Name:               multinode-314500-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-314500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=multinode-314500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_02_29T02_18_54_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:18:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-314500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:24:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:19:54 +0000   Thu, 29 Feb 2024 02:18:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:19:54 +0000   Thu, 29 Feb 2024 02:18:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:19:54 +0000   Thu, 29 Feb 2024 02:18:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:19:54 +0000   Thu, 29 Feb 2024 02:19:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.5.202
	  Hostname:    multinode-314500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 77aee02c4bee424dbfd3564939d0a240
	  System UUID:                b1627b4d-7d75-ed47-9ee8-e9d67e74df72
	  Boot ID:                    87f7a67a-8d8e-41a1-ae90-0f8737e86f14
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-826w2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kindnet-6r7b8               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m23s
	  kube-system                 kube-proxy-4gbrl            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m23s (x5 over 5m25s)  kubelet          Node multinode-314500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m23s (x5 over 5m25s)  kubelet          Node multinode-314500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m23s (x5 over 5m25s)  kubelet          Node multinode-314500-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m22s                  node-controller  Node multinode-314500-m02 event: Registered Node multinode-314500-m02 in Controller
	  Normal  NodeReady                5m7s                   kubelet          Node multinode-314500-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.779304] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	[Feb29 02:14] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +40.611904] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.181228] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[Feb29 02:15] systemd-fstab-generator[907]: Ignoring "noauto" option for root device
	[  +0.106381] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.524061] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.195671] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[  +0.235266] systemd-fstab-generator[974]: Ignoring "noauto" option for root device
	[  +1.802878] systemd-fstab-generator[1132]: Ignoring "noauto" option for root device
	[  +0.200825] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.187739] systemd-fstab-generator[1156]: Ignoring "noauto" option for root device
	[  +0.272932] systemd-fstab-generator[1171]: Ignoring "noauto" option for root device
	[ +12.596345] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.100135] kauditd_printk_skb: 205 callbacks suppressed
	[  +9.124872] systemd-fstab-generator[1655]: Ignoring "noauto" option for root device
	[  +0.104351] kauditd_printk_skb: 51 callbacks suppressed
	[  +8.767706] systemd-fstab-generator[2631]: Ignoring "noauto" option for root device
	[  +0.137526] kauditd_printk_skb: 62 callbacks suppressed
	[Feb29 02:16] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.600907] kauditd_printk_skb: 29 callbacks suppressed
	[Feb29 02:19] hrtimer: interrupt took 2175903 ns
	[  +0.988605] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [e5bc2b41493b] <==
	{"level":"info","ts":"2024-02-29T02:15:45.444825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 switched to configuration voters=(2921898997477636162)"}
	{"level":"info","ts":"2024-02-29T02:15:45.449232Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b70ab9772a44d22c","local-member-id":"288caba846397842","added-peer-id":"288caba846397842","added-peer-peer-urls":["https://172.19.2.165:2380"]}
	{"level":"info","ts":"2024-02-29T02:15:45.445002Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.2.165:2380"}
	{"level":"info","ts":"2024-02-29T02:15:45.451781Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"288caba846397842","initial-advertise-peer-urls":["https://172.19.2.165:2380"],"listen-peer-urls":["https://172.19.2.165:2380"],"advertise-client-urls":["https://172.19.2.165:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.2.165:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T02:15:45.451813Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T02:15:45.456207Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.2.165:2380"}
	{"level":"info","ts":"2024-02-29T02:15:46.279614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-29T02:15:46.279927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-29T02:15:46.280297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 received MsgPreVoteResp from 288caba846397842 at term 1"}
	{"level":"info","ts":"2024-02-29T02:15:46.280432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 became candidate at term 2"}
	{"level":"info","ts":"2024-02-29T02:15:46.280578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 received MsgVoteResp from 288caba846397842 at term 2"}
	{"level":"info","ts":"2024-02-29T02:15:46.280732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 became leader at term 2"}
	{"level":"info","ts":"2024-02-29T02:15:46.280856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 288caba846397842 elected leader 288caba846397842 at term 2"}
	{"level":"info","ts":"2024-02-29T02:15:46.285663Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:15:46.289486Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"288caba846397842","local-member-attributes":"{Name:multinode-314500 ClientURLs:[https://172.19.2.165:2379]}","request-path":"/0/members/288caba846397842/attributes","cluster-id":"b70ab9772a44d22c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T02:15:46.289834Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:15:46.292192Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T02:15:46.295691Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b70ab9772a44d22c","local-member-id":"288caba846397842","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:15:46.29636Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:15:46.296607Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:15:46.295902Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:15:46.298395Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.2.165:2379"}
	{"level":"info","ts":"2024-02-29T02:15:46.344121Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T02:15:46.352275Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T02:19:03.699393Z","caller":"traceutil/trace.go:171","msg":"trace[2003273810] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"117.265217ms","start":"2024-02-29T02:19:03.582107Z","end":"2024-02-29T02:19:03.699373Z","steps":["trace[2003273810] 'process raft request'  (duration: 117.135811ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:24:16 up 10 min,  0 users,  load average: 0.30, 0.31, 0.18
	Linux multinode-314500 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [dd61788b0a0d] <==
	I0229 02:23:12.795950       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:23:22.802838       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:23:22.802874       1 main.go:227] handling current node
	I0229 02:23:22.802885       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:23:22.802892       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:23:32.817016       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:23:32.817138       1 main.go:227] handling current node
	I0229 02:23:32.817153       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:23:32.817161       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:23:42.823749       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:23:42.823850       1 main.go:227] handling current node
	I0229 02:23:42.823863       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:23:42.823870       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:23:52.838297       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:23:52.838412       1 main.go:227] handling current node
	I0229 02:23:52.838426       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:23:52.838434       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:24:02.844752       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:24:02.844848       1 main.go:227] handling current node
	I0229 02:24:02.844862       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:24:02.844869       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:24:12.851420       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:24:12.851513       1 main.go:227] handling current node
	I0229 02:24:12.851555       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:24:12.851564       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [9815e253e1a0] <==
	I0229 02:15:48.203853       1 cache.go:39] Caches are synced for autoregister controller
	I0229 02:15:48.232330       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 02:15:48.232740       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 02:15:48.234868       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 02:15:48.236962       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 02:15:48.238608       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0229 02:15:48.238634       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0229 02:15:48.240130       1 controller.go:624] quota admission added evaluator for: namespaces
	I0229 02:15:48.259371       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 02:15:48.288795       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 02:15:49.050665       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0229 02:15:49.064719       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0229 02:15:49.064738       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0229 02:15:49.909107       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0229 02:15:49.978633       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0229 02:15:50.069966       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0229 02:15:50.082357       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.19.2.165]
	I0229 02:15:50.083992       1 controller.go:624] quota admission added evaluator for: endpoints
	I0229 02:15:50.090388       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0229 02:15:50.155063       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0229 02:15:51.998918       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0229 02:15:52.011885       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0229 02:15:52.026788       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0229 02:16:05.076718       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0229 02:16:05.263867       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [26b1ab05f99a] <==
	I0229 02:16:05.737501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.104µs"
	I0229 02:16:16.382507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="902.949µs"
	I0229 02:16:16.409455       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.604µs"
	I0229 02:16:17.774033       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="155.809µs"
	I0229 02:16:17.862409       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="36.897ms"
	I0229 02:16:17.868791       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.404µs"
	I0229 02:16:19.467304       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0229 02:18:53.354208       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-314500-m02\" does not exist"
	I0229 02:18:53.368926       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-314500-m02" podCIDRs=["10.244.1.0/24"]
	I0229 02:18:53.372475       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4gbrl"
	I0229 02:18:53.376875       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6r7b8"
	I0229 02:18:54.492680       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-314500-m02"
	I0229 02:18:54.493161       1 event.go:307] "Event occurred" object="multinode-314500-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-314500-m02 event: Registered Node multinode-314500-m02 in Controller"
	I0229 02:19:09.849595       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-314500-m02"
	I0229 02:19:34.656812       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0229 02:19:34.678854       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-826w2"
	I0229 02:19:34.689390       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-qcblm"
	I0229 02:19:34.698278       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="40.961829ms"
	I0229 02:19:34.725163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="26.446345ms"
	I0229 02:19:34.739405       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.836452ms"
	I0229 02:19:34.740025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.602µs"
	I0229 02:19:36.713325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="8.816271ms"
	I0229 02:19:36.713610       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="108.606µs"
	I0229 02:19:37.478878       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.961832ms"
	I0229 02:19:37.479378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="145.408µs"
	
	
	==> kube-proxy [c93e33130746] <==
	I0229 02:16:07.488822       1 server_others.go:69] "Using iptables proxy"
	I0229 02:16:07.511408       1 node.go:141] Successfully retrieved node IP: 172.19.2.165
	I0229 02:16:07.646052       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 02:16:07.646080       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 02:16:07.652114       1 server_others.go:152] "Using iptables Proxier"
	I0229 02:16:07.652346       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 02:16:07.652698       1 server.go:846] "Version info" version="v1.28.4"
	I0229 02:16:07.652712       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:16:07.654751       1 config.go:188] "Starting service config controller"
	I0229 02:16:07.655126       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 02:16:07.655241       1 config.go:97] "Starting endpoint slice config controller"
	I0229 02:16:07.655327       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 02:16:07.656324       1 config.go:315] "Starting node config controller"
	I0229 02:16:07.676099       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 02:16:07.679653       1 shared_informer.go:318] Caches are synced for node config
	I0229 02:16:07.757691       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 02:16:07.757737       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [ab0c4864aee5] <==
	W0229 02:15:48.237220       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 02:15:48.237295       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 02:15:49.044071       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 02:15:49.044214       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 02:15:49.085996       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 02:15:49.086626       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 02:15:49.106158       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 02:15:49.106848       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 02:15:49.126181       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 02:15:49.126580       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0229 02:15:49.196878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 02:15:49.196987       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0229 02:15:49.236282       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 02:15:49.236658       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 02:15:49.372072       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 02:15:49.372116       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 02:15:49.403666       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 02:15:49.403942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 02:15:49.418593       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0229 02:15:49.418838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0229 02:15:49.492335       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0229 02:15:49.492758       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0229 02:15:49.585577       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 02:15:49.585986       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0229 02:15:52.113114       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 02:19:52 multinode-314500 kubelet[2651]: E0229 02:19:52.340057    2651 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:19:52 multinode-314500 kubelet[2651]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:19:52 multinode-314500 kubelet[2651]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:19:52 multinode-314500 kubelet[2651]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:19:52 multinode-314500 kubelet[2651]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:20:52 multinode-314500 kubelet[2651]: E0229 02:20:52.341469    2651 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:20:52 multinode-314500 kubelet[2651]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:20:52 multinode-314500 kubelet[2651]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:20:52 multinode-314500 kubelet[2651]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:20:52 multinode-314500 kubelet[2651]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:21:52 multinode-314500 kubelet[2651]: E0229 02:21:52.340999    2651 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:21:52 multinode-314500 kubelet[2651]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:21:52 multinode-314500 kubelet[2651]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:21:52 multinode-314500 kubelet[2651]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:21:52 multinode-314500 kubelet[2651]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:22:52 multinode-314500 kubelet[2651]: E0229 02:22:52.343746    2651 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:22:52 multinode-314500 kubelet[2651]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:22:52 multinode-314500 kubelet[2651]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:22:52 multinode-314500 kubelet[2651]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:22:52 multinode-314500 kubelet[2651]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:23:52 multinode-314500 kubelet[2651]: E0229 02:23:52.340668    2651 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:23:52 multinode-314500 kubelet[2651]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:23:52 multinode-314500 kubelet[2651]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:23:52 multinode-314500 kubelet[2651]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:23:52 multinode-314500 kubelet[2651]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0229 02:24:08.992534     580 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-314500 -n multinode-314500
E0229 02:24:28.796447    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-314500 -n multinode-314500: (11.336944s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-314500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/AddNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/AddNode (232.89s)

TestMultiNode/serial/CopyFile (65.18s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-314500 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-314500 status --output json --alsologtostderr: exit status 2 (33.3545629s)

-- stdout --
	[{"Name":"multinode-314500","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"multinode-314500-m02","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true},{"Name":"multinode-314500-m03","Host":"Running","Kubelet":"Stopped","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

-- /stdout --
** stderr ** 
	W0229 02:24:36.628011    7456 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 02:24:36.685261    7456 out.go:291] Setting OutFile to fd 1448 ...
	I0229 02:24:36.686120    7456 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:24:36.686120    7456 out.go:304] Setting ErrFile to fd 1348...
	I0229 02:24:36.686120    7456 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:24:36.700945    7456 out.go:298] Setting JSON to true
	I0229 02:24:36.700945    7456 mustload.go:65] Loading cluster: multinode-314500
	I0229 02:24:36.700945    7456 notify.go:220] Checking for updates...
	I0229 02:24:36.701561    7456 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:24:36.701561    7456 status.go:255] checking status of multinode-314500 ...
	I0229 02:24:36.702789    7456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:24:38.714432    7456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:24:38.714512    7456 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:24:38.714512    7456 status.go:330] multinode-314500 host status = "Running" (err=<nil>)
	I0229 02:24:38.714602    7456 host.go:66] Checking if "multinode-314500" exists ...
	I0229 02:24:38.715379    7456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:24:40.775171    7456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:24:40.775397    7456 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:24:40.775397    7456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:24:43.143471    7456 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:24:43.143530    7456 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:24:43.143530    7456 host.go:66] Checking if "multinode-314500" exists ...
	I0229 02:24:43.153597    7456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 02:24:43.153597    7456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:24:45.137750    7456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:24:45.138734    7456 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:24:45.138734    7456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:24:47.560438    7456 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:24:47.560438    7456 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:24:47.561194    7456 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:24:47.662148    7456 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.5082151s)
	I0229 02:24:47.671554    7456 ssh_runner.go:195] Run: systemctl --version
	I0229 02:24:47.693884    7456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:24:47.719804    7456 kubeconfig.go:92] found "multinode-314500" server: "https://172.19.2.165:8443"
	I0229 02:24:47.719875    7456 api_server.go:166] Checking apiserver status ...
	I0229 02:24:47.728820    7456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:24:47.772226    7456 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2018/cgroup
	W0229 02:24:47.792803    7456 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2018/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:24:47.803372    7456 ssh_runner.go:195] Run: ls
	I0229 02:24:47.810283    7456 api_server.go:253] Checking apiserver healthz at https://172.19.2.165:8443/healthz ...
	I0229 02:24:47.816398    7456 api_server.go:279] https://172.19.2.165:8443/healthz returned 200:
	ok
	I0229 02:24:47.816398    7456 status.go:421] multinode-314500 apiserver status = Running (err=<nil>)
	I0229 02:24:47.817409    7456 status.go:257] multinode-314500 status: &{Name:multinode-314500 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0229 02:24:47.817409    7456 status.go:255] checking status of multinode-314500-m02 ...
	I0229 02:24:47.817480    7456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:24:49.848239    7456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:24:49.848239    7456 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:24:49.848239    7456 status.go:330] multinode-314500-m02 host status = "Running" (err=<nil>)
	I0229 02:24:49.848239    7456 host.go:66] Checking if "multinode-314500-m02" exists ...
	I0229 02:24:49.848824    7456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:24:51.845890    7456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:24:51.845890    7456 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:24:51.846539    7456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:24:54.247576    7456 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:24:54.247576    7456 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:24:54.247576    7456 host.go:66] Checking if "multinode-314500-m02" exists ...
	I0229 02:24:54.258848    7456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 02:24:54.258848    7456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:24:56.273057    7456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:24:56.273057    7456 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:24:56.273191    7456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:24:58.625697    7456 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:24:58.625773    7456 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:24:58.626215    7456 sshutil.go:53] new ssh client: &{IP:172.19.5.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\id_rsa Username:docker}
	I0229 02:24:58.727150    7456 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.4680535s)
	I0229 02:24:58.736200    7456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:24:58.761964    7456 status.go:257] multinode-314500-m02 status: &{Name:multinode-314500-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0229 02:24:58.761964    7456 status.go:255] checking status of multinode-314500-m03 ...
	I0229 02:24:58.763112    7456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:25:00.738662    7456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:25:00.738662    7456 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:25:00.738662    7456 status.go:330] multinode-314500-m03 host status = "Running" (err=<nil>)
	I0229 02:25:00.738662    7456 host.go:66] Checking if "multinode-314500-m03" exists ...
	I0229 02:25:00.739396    7456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:25:02.753385    7456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:25:02.754008    7456 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:25:02.754087    7456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:25:05.213521    7456 main.go:141] libmachine: [stdout =====>] : 172.19.12.66
	
	I0229 02:25:05.213521    7456 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:25:05.214456    7456 host.go:66] Checking if "multinode-314500-m03" exists ...
	I0229 02:25:05.224032    7456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 02:25:05.224032    7456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:25:07.248900    7456 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:25:07.248900    7456 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:25:07.249133    7456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:25:09.700107    7456 main.go:141] libmachine: [stdout =====>] : 172.19.12.66
	
	I0229 02:25:09.700107    7456 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:25:09.700984    7456 sshutil.go:53] new ssh client: &{IP:172.19.12.66 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m03\id_rsa Username:docker}
	I0229 02:25:09.798604    7456 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.5743181s)
	I0229 02:25:09.807735    7456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:25:09.832878    7456 status.go:257] multinode-314500-m03 status: &{Name:multinode-314500-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:176: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-314500 status --output json --alsologtostderr" : exit status 2
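The exit status 2 follows directly from the stdout JSON above: `multinode-314500-m03` reports `"Kubelet":"Stopped"` while its host is `Running`. A minimal sketch of the check the status command effectively performs (field names are taken from the JSON output itself, not from minikube's source, so treat the struct as an assumption):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeStatus mirrors the fields visible in the `minikube status --output
// json` stdout above; only the fields used below are declared.
type nodeStatus struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
	Worker    bool
}

// degradedNodes returns the names of nodes whose kubelet is not Running —
// the condition that drives the non-zero exit status seen in this test.
func degradedNodes(raw []byte) ([]string, error) {
	var nodes []nodeStatus
	if err := json.Unmarshal(raw, &nodes); err != nil {
		return nil, err
	}
	var bad []string
	for _, n := range nodes {
		if n.Kubelet != "Running" {
			bad = append(bad, n.Name)
		}
	}
	return bad, nil
}

func main() {
	// The exact stdout captured by the failing status invocation above.
	raw := []byte(`[{"Name":"multinode-314500","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"multinode-314500-m02","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true},{"Name":"multinode-314500-m03","Host":"Running","Kubelet":"Stopped","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]`)
	bad, err := degradedNodes(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(bad) // → [multinode-314500-m03]
}
```

Note the m03 kubelet was already reported Stopped before CopyFile ran, so this failure cascades from the earlier AddNode failure rather than indicating a new problem.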
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-314500 -n multinode-314500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-314500 -n multinode-314500: (11.282835s)
helpers_test.go:244: <<< TestMultiNode/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-314500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-314500 logs -n 25: (7.7586341s)
helpers_test.go:252: TestMultiNode/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p mount-start-1-141600                           | mount-start-1-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:09 UTC | 29 Feb 24 02:10 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-141600 ssh -- ls                    | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:10 UTC | 29 Feb 24 02:10 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-141600                           | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:10 UTC | 29 Feb 24 02:10 UTC |
	| start   | -p mount-start-2-141600                           | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:10 UTC | 29 Feb 24 02:12 UTC |
	| mount   | C:\Users\jenkins.minikube5:/minikube-host         | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:12 UTC |                     |
	|         | --profile mount-start-2-141600 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-141600 ssh -- ls                    | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:12 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-141600                           | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:12 UTC |
	| delete  | -p mount-start-1-141600                           | mount-start-1-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:12 UTC |
	| start   | -p multinode-314500                               | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:13 UTC | 29 Feb 24 02:19 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- apply -f                   | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- rollout                    | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- get pods -o                | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- get pods -o                | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2 -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- get pods -o                | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC |                     |
	|         | busybox-5b5d89c9d6-826w2 -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.0.1                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC |                     |
	|         | busybox-5b5d89c9d6-qcblm -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.0.1                           |                      |                   |         |                     |                     |
	| node    | add -p multinode-314500 -v 3                      | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:20 UTC |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:13:00
	Running on machine: minikube5
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:13:00.149906    8584 out.go:291] Setting OutFile to fd 1312 ...
	I0229 02:13:00.150227    8584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:13:00.150227    8584 out.go:304] Setting ErrFile to fd 1328...
	I0229 02:13:00.150227    8584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:13:00.171700    8584 out.go:298] Setting JSON to false
	I0229 02:13:00.175741    8584 start.go:129] hostinfo: {"hostname":"minikube5","uptime":269007,"bootTime":1708903773,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 02:13:00.175741    8584 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 02:13:00.177046    8584 out.go:177] * [multinode-314500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 02:13:00.177046    8584 notify.go:220] Checking for updates...
	I0229 02:13:00.178097    8584 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:13:00.178485    8584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:13:00.178485    8584 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 02:13:00.179850    8584 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:13:00.180273    8584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:13:00.181791    8584 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:13:05.205228    8584 out.go:177] * Using the hyperv driver based on user configuration
	I0229 02:13:05.206271    8584 start.go:299] selected driver: hyperv
	I0229 02:13:05.206271    8584 start.go:903] validating driver "hyperv" against <nil>
	I0229 02:13:05.206359    8584 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:13:05.251841    8584 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 02:13:05.252685    8584 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:13:05.252685    8584 cni.go:84] Creating CNI manager for ""
	I0229 02:13:05.252685    8584 cni.go:136] 0 nodes found, recommending kindnet
	I0229 02:13:05.252685    8584 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0229 02:13:05.252685    8584 start_flags.go:323] config:
	{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:13:05.253940    8584 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:13:05.255538    8584 out.go:177] * Starting control plane node multinode-314500 in cluster multinode-314500
	I0229 02:13:05.256114    8584 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:13:05.256302    8584 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 02:13:05.256344    8584 cache.go:56] Caching tarball of preloaded images
	I0229 02:13:05.256572    8584 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:13:05.256572    8584 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 02:13:05.257361    8584 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:13:05.257455    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json: {Name:mkd3169e69638735699adbb2ff8489bce372cb2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:13:05.258503    8584 start.go:365] acquiring machines lock for multinode-314500: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:13:05.258691    8584 start.go:369] acquired machines lock for "multinode-314500" in 152µs
	I0229 02:13:05.258871    8584 start.go:93] Provisioning new machine with config: &{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 02:13:05.258976    8584 start.go:125] createHost starting for "" (driver="hyperv")
	I0229 02:13:05.259751    8584 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 02:13:05.259891    8584 start.go:159] libmachine.API.Create for "multinode-314500" (driver="hyperv")
	I0229 02:13:05.259891    8584 client.go:168] LocalClient.Create starting
	I0229 02:13:05.260497    8584 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0229 02:13:05.260497    8584 main.go:141] libmachine: Decoding PEM data...
	I0229 02:13:05.260497    8584 main.go:141] libmachine: Parsing certificate...
	I0229 02:13:05.260497    8584 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0229 02:13:05.261186    8584 main.go:141] libmachine: Decoding PEM data...
	I0229 02:13:05.261186    8584 main.go:141] libmachine: Parsing certificate...
	I0229 02:13:05.261186    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0229 02:13:07.286347    8584 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0229 02:13:07.286422    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:07.286509    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0229 02:13:08.976234    8584 main.go:141] libmachine: [stdout =====>] : False
	
	I0229 02:13:08.976234    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:08.976234    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 02:13:10.405564    8584 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 02:13:10.405718    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:10.405718    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 02:13:13.896897    8584 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 02:13:13.896976    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:13.899798    8584 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 02:13:14.290871    8584 main.go:141] libmachine: Creating SSH key...
	I0229 02:13:14.527065    8584 main.go:141] libmachine: Creating VM...
	I0229 02:13:14.527065    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 02:13:17.265891    8584 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 02:13:17.266097    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:17.266097    8584 main.go:141] libmachine: Using switch "Default Switch"
	I0229 02:13:17.266238    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 02:13:18.963078    8584 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 02:13:18.963078    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:18.963078    8584 main.go:141] libmachine: Creating VHD
	I0229 02:13:18.964222    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0229 02:13:22.594784    8584 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 884B5862-3469-4CFD-B182-8E081E737039
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0229 02:13:22.594784    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:22.594784    8584 main.go:141] libmachine: Writing magic tar header
	I0229 02:13:22.594784    8584 main.go:141] libmachine: Writing SSH key tar header
	I0229 02:13:22.604709    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0229 02:13:25.650762    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:25.650762    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:25.650762    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\disk.vhd' -SizeBytes 20000MB
	I0229 02:13:28.088594    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:28.088773    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:28.088918    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-314500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0229 02:13:31.464130    8584 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-314500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 02:13:31.464130    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:31.464846    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-314500 -DynamicMemoryEnabled $false
	I0229 02:13:33.602734    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:33.602734    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:33.602734    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-314500 -Count 2
	I0229 02:13:35.681481    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:35.682414    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:35.682502    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-314500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\boot2docker.iso'
	I0229 02:13:38.162637    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:38.162637    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:38.163401    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-314500 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\disk.vhd'
	I0229 02:13:40.645938    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:40.646015    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:40.646015    8584 main.go:141] libmachine: Starting VM...
	I0229 02:13:40.646015    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-314500
	I0229 02:13:43.355580    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:43.355580    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:43.355580    8584 main.go:141] libmachine: Waiting for host to start...
	I0229 02:13:43.355580    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:13:45.477300    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:13:45.477397    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:45.477397    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:13:47.817639    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:47.817639    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:48.829666    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:13:50.912195    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:13:50.912241    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:50.912370    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:13:53.314227    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:53.314300    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:54.326584    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:13:56.402395    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:13:56.403080    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:56.403237    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:13:58.748206    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:13:58.748429    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:13:59.750928    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:01.825704    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:01.825704    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:01.826435    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:04.171500    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:14:04.171557    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:05.181274    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:07.245329    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:07.245623    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:07.245781    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:09.720669    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:09.720669    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:09.721021    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:11.754505    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:11.755426    8584 main.go:141] libmachine: [stderr =====>] : 
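	The alternating `( Get-VM ).state` / `ipaddresses[0]` queries above form a poll loop: the driver keeps asking the VM's first network adapter for an address until one appears (here, after roughly 25 seconds). A deterministic Go sketch of that loop with the PowerShell query replaced by an injected function; `waitForIP` is a hypothetical name, and the real driver also sleeps about a second between attempts:

```go
package main

import (
	"errors"
	"fmt"
)

// waitForIP repeatedly calls query, which stands in for the PowerShell
// "(( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]" command,
// until it returns a non-empty address or the attempt budget runs out.
// The inter-attempt sleep is elided to keep the sketch deterministic.
func waitForIP(query func() string, attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip := query(); ip != "" {
			return ip, nil
		}
	}
	return "", errors.New("host never reported an IP address")
}

func main() {
	// Simulate the log above: several empty answers, then 172.19.2.165.
	answers := []string{"", "", "", "", "172.19.2.165"}
	i := 0
	ip, err := waitForIP(func() string { a := answers[i]; i++; return a }, 10)
	fmt.Println(ip, err)
}
```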
	I0229 02:14:11.755426    8584 machine.go:88] provisioning docker machine ...
	I0229 02:14:11.755516    8584 buildroot.go:166] provisioning hostname "multinode-314500"
	I0229 02:14:11.755562    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:13.804208    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:13.804208    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:13.804335    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:16.247231    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:16.248239    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:16.254331    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:14:16.267585    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:14:16.267585    8584 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-314500 && echo "multinode-314500" | sudo tee /etc/hostname
	I0229 02:14:16.424392    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-314500
	
	I0229 02:14:16.424516    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:18.448299    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:18.448299    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:18.448830    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:20.858056    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:20.858056    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:20.863979    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:14:20.864174    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:14:20.864174    8584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-314500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-314500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-314500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:14:21.010675    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:14:21.010763    8584 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 02:14:21.010763    8584 buildroot.go:174] setting up certificates
	I0229 02:14:21.010852    8584 provision.go:83] configureAuth start
	I0229 02:14:21.011112    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:22.998181    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:22.998447    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:22.998552    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:25.432573    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:25.432573    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:25.433124    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:27.425883    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:27.426494    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:27.426494    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:29.833478    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:29.833478    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:29.833478    8584 provision.go:138] copyHostCerts
	I0229 02:14:29.834264    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 02:14:29.834264    8584 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 02:14:29.834264    8584 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 02:14:29.834791    8584 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 02:14:29.835948    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 02:14:29.836088    8584 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 02:14:29.836088    8584 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 02:14:29.836088    8584 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 02:14:29.837182    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 02:14:29.837305    8584 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 02:14:29.837396    8584 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 02:14:29.837627    8584 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0229 02:14:29.838481    8584 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-314500 san=[172.19.2.165 172.19.2.165 localhost 127.0.0.1 minikube multinode-314500]
	I0229 02:14:29.990342    8584 provision.go:172] copyRemoteCerts
	I0229 02:14:29.998349    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:14:29.999347    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:32.015676    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:32.015676    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:32.016407    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:34.434860    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:34.435751    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:34.435751    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:14:34.540272    8584 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5416689s)
	I0229 02:14:34.540378    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 02:14:34.540655    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 02:14:34.589037    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 02:14:34.589037    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0229 02:14:34.637988    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 02:14:34.638288    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 02:14:34.684997    8584 provision.go:86] duration metric: configureAuth took 13.6732738s
	I0229 02:14:34.684997    8584 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:14:34.685957    8584 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:14:34.685957    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:36.732569    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:36.732569    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:36.732893    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:39.171929    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:39.171986    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:39.176641    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:14:39.177166    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:14:39.177237    8584 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 02:14:39.296794    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 02:14:39.296888    8584 buildroot.go:70] root file system type: tmpfs
	I0229 02:14:39.296957    8584 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 02:14:39.296957    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:41.315910    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:41.315910    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:41.315910    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:43.719853    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:43.720852    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:43.725258    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:14:43.725666    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:14:43.725666    8584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 02:14:43.881883    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 02:14:43.882199    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:45.916519    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:45.916519    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:45.917559    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:48.351202    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:48.351586    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:48.356595    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:14:48.356668    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:14:48.356668    8584 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 02:14:49.392262    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
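	The `diff -u old new || { mv new old; systemctl ... }` command above installs the freshly generated unit file only when it differs from (or replaces a missing) current one. A minimal sketch of that compare-and-swap idiom, using throwaway /tmp paths and a hypothetical unit body instead of real systemd files:

```shell
# Compare-and-swap install of a service file, mirroring the log's
# `diff -u old new || { mv new old; ... }` idiom.
# /tmp paths and unit contents here are hypothetical stand-ins.
set -eu
new=/tmp/demo-docker.service.new
cur=/tmp/demo-docker.service
rm -f "$cur" "$new"

printf '%s\n' '[Unit]' 'Description=Demo Engine' > "$new"

# diff exits non-zero when the current file is missing or differs,
# which is exactly when the replacement branch should run.
diff -u "$cur" "$new" >/dev/null 2>&1 || {
  mv "$new" "$cur"
  # the real flow would follow with `systemctl daemon-reload` etc.
}
```

In the log the diff fails with "No such file or directory" (first provision), so the replacement branch always runs and the unit is enabled for the first time.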
	
	I0229 02:14:49.392262    8584 machine.go:91] provisioned docker machine in 37.6347323s
	I0229 02:14:49.392262    8584 client.go:171] LocalClient.Create took 1m44.1265457s
	I0229 02:14:49.392262    8584 start.go:167] duration metric: libmachine.API.Create for "multinode-314500" took 1m44.1265457s
	I0229 02:14:49.392262    8584 start.go:300] post-start starting for "multinode-314500" (driver="hyperv")
	I0229 02:14:49.393258    8584 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:14:49.402259    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:14:49.402259    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:51.395389    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:51.395616    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:51.395690    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:53.788270    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:53.788752    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:53.789362    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:14:53.893141    8584 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.490524s)
	I0229 02:14:53.905375    8584 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:14:53.912851    8584 command_runner.go:130] > NAME=Buildroot
	I0229 02:14:53.912851    8584 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 02:14:53.912851    8584 command_runner.go:130] > ID=buildroot
	I0229 02:14:53.912851    8584 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 02:14:53.912851    8584 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 02:14:53.912851    8584 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:14:53.912851    8584 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 02:14:53.913631    8584 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 02:14:53.914277    8584 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> 33122.pem in /etc/ssl/certs
	I0229 02:14:53.914277    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /etc/ssl/certs/33122.pem
	I0229 02:14:53.923918    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:14:53.943567    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /etc/ssl/certs/33122.pem (1708 bytes)
	I0229 02:14:53.989666    8584 start.go:303] post-start completed in 4.5952349s
	I0229 02:14:53.991784    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:14:55.999148    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:14:55.999350    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:55.999350    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:14:58.385355    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:14:58.385355    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:14:58.385948    8584 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:14:58.389663    8584 start.go:128] duration metric: createHost completed in 1m53.1242572s
	I0229 02:14:58.389764    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:15:00.365905    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:15:00.365905    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:00.365905    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:15:02.777961    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:15:02.777961    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:02.782646    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:15:02.783280    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:15:02.783280    8584 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:15:02.899664    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709172903.069532857
	
	I0229 02:15:02.899664    8584 fix.go:206] guest clock: 1709172903.069532857
	I0229 02:15:02.899664    8584 fix.go:219] Guest: 2024-02-29 02:15:03.069532857 +0000 UTC Remote: 2024-02-29 02:14:58.3896639 +0000 UTC m=+118.373915301 (delta=4.679868957s)
	I0229 02:15:02.899873    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:15:04.946764    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:15:04.946764    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:04.946764    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:15:07.386956    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:15:07.386956    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:07.391193    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:15:07.391193    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.165 22 <nil> <nil>}
	I0229 02:15:07.391193    8584 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709172902
	I0229 02:15:07.538124    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 02:15:02 UTC 2024
	
	I0229 02:15:07.538124    8584 fix.go:226] clock set: Thu Feb 29 02:15:02 UTC 2024
	 (err=<nil>)
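	The clock fix above reads the guest's `date +%s.%N`, compares it with the host time, and on a large enough delta resets the guest with `sudo date -s @<epoch>`. A sketch of the skew check, with hypothetical epoch values (the log's delta was ~4.7 s; the 2-second threshold is an assumption, not minikube's actual cutoff):

```shell
# Clock-skew check sketched from the fix.go lines above.
# guest/host are hypothetical epoch seconds; threshold is assumed.
guest=1709172903
host=1709172898
delta=$((guest - host))
abs=${delta#-}   # strip a leading minus sign for the comparison
if [ "$abs" -gt 2 ]; then
  # minikube would now run `sudo date -s @<host epoch>` over SSH.
  echo "skew ${delta}s, resyncing guest clock"
fi
```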
	I0229 02:15:07.538124    8584 start.go:83] releasing machines lock for "multinode-314500", held for 2m2.2725929s
	I0229 02:15:07.538124    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:15:09.578277    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:15:09.578277    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:09.578477    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:15:12.017474    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:15:12.017474    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:12.020803    8584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:15:12.020938    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:15:12.028085    8584 ssh_runner.go:195] Run: cat /version.json
	I0229 02:15:12.028085    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:15:14.106976    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:15:14.106976    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:14.107962    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:15:14.108048    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:15:14.108166    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:14.108210    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:15:16.599162    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:15:16.599162    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:16.599717    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:15:16.624118    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:15:16.624199    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:15:16.624505    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:15:16.878087    8584 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 02:15:16.878258    8584 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0229 02:15:16.878258    8584 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8570973s)
	I0229 02:15:16.878258    8584 ssh_runner.go:235] Completed: cat /version.json: (4.8499018s)
	I0229 02:15:16.891953    8584 ssh_runner.go:195] Run: systemctl --version
	I0229 02:15:16.901191    8584 command_runner.go:130] > systemd 252 (252)
	I0229 02:15:16.901288    8584 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0229 02:15:16.911194    8584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 02:15:16.920182    8584 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0229 02:15:16.920182    8584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:15:16.929614    8584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:15:16.958720    8584 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0229 02:15:16.958791    8584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
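	The CNI-disabling step above renames bridge/podman configs with a `.mk_disabled` suffix so the container runtime ignores them. A sandboxed demo of the same `find`/`mv` pattern, run against a throwaway directory instead of /etc/cni/net.d (file names are illustrative):

```shell
# Demo of the CNI-disabling find/mv step from the log, in a scratch dir.
d=/tmp/cni-demo
rm -rf "$d"; mkdir -p "$d"
touch "$d/87-podman-bridge.conflist" "$d/10-flannel.conflist"

# Rename bridge/podman configs with a .mk_disabled suffix, leaving
# other configs (and already-disabled files) untouched.
find "$d" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
```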
	I0229 02:15:16.958791    8584 start.go:475] detecting cgroup driver to use...
	I0229 02:15:16.958791    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:15:16.993577    8584 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0229 02:15:17.006166    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 02:15:17.036528    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:15:17.056400    8584 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:15:17.066084    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:15:17.094368    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:15:17.125650    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:15:17.155407    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:15:17.184091    8584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:15:17.211981    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 02:15:17.240589    8584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:15:17.258992    8584 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 02:15:17.271051    8584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:15:17.301079    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:15:17.510984    8584 ssh_runner.go:195] Run: sudo systemctl restart containerd
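	The sed commands above rewrite /etc/containerd/config.toml in place to pin the cgroupfs driver. The key substitution, applied here to a scratch copy of the file so nothing real is touched:

```shell
# The SystemdCgroup substitution from the log, on a scratch config.toml.
cfg=/tmp/containerd-demo.toml
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Force the cgroupfs driver; the captured group preserves indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
```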
	I0229 02:15:17.540848    8584 start.go:475] detecting cgroup driver to use...
	I0229 02:15:17.549602    8584 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 02:15:17.574482    8584 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0229 02:15:17.574482    8584 command_runner.go:130] > [Unit]
	I0229 02:15:17.574482    8584 command_runner.go:130] > Description=Docker Application Container Engine
	I0229 02:15:17.574482    8584 command_runner.go:130] > Documentation=https://docs.docker.com
	I0229 02:15:17.574482    8584 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0229 02:15:17.574482    8584 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0229 02:15:17.574482    8584 command_runner.go:130] > StartLimitBurst=3
	I0229 02:15:17.574482    8584 command_runner.go:130] > StartLimitIntervalSec=60
	I0229 02:15:17.574482    8584 command_runner.go:130] > [Service]
	I0229 02:15:17.574482    8584 command_runner.go:130] > Type=notify
	I0229 02:15:17.574482    8584 command_runner.go:130] > Restart=on-failure
	I0229 02:15:17.574482    8584 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0229 02:15:17.574482    8584 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0229 02:15:17.574482    8584 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0229 02:15:17.574482    8584 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0229 02:15:17.574482    8584 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0229 02:15:17.574482    8584 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0229 02:15:17.574482    8584 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0229 02:15:17.574482    8584 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0229 02:15:17.574482    8584 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0229 02:15:17.574482    8584 command_runner.go:130] > ExecStart=
	I0229 02:15:17.574482    8584 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0229 02:15:17.574482    8584 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0229 02:15:17.574482    8584 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0229 02:15:17.574482    8584 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0229 02:15:17.574482    8584 command_runner.go:130] > LimitNOFILE=infinity
	I0229 02:15:17.574482    8584 command_runner.go:130] > LimitNPROC=infinity
	I0229 02:15:17.574482    8584 command_runner.go:130] > LimitCORE=infinity
	I0229 02:15:17.574482    8584 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0229 02:15:17.574482    8584 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0229 02:15:17.574482    8584 command_runner.go:130] > TasksMax=infinity
	I0229 02:15:17.574482    8584 command_runner.go:130] > TimeoutStartSec=0
	I0229 02:15:17.574482    8584 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0229 02:15:17.574482    8584 command_runner.go:130] > Delegate=yes
	I0229 02:15:17.574482    8584 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0229 02:15:17.574482    8584 command_runner.go:130] > KillMode=process
	I0229 02:15:17.574482    8584 command_runner.go:130] > [Install]
	I0229 02:15:17.574482    8584 command_runner.go:130] > WantedBy=multi-user.target
	I0229 02:15:17.584629    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:15:17.616355    8584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:15:17.657950    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:15:17.693651    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:15:17.729096    8584 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:15:17.784099    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:15:17.808125    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:15:17.842233    8584 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0229 02:15:17.851465    8584 ssh_runner.go:195] Run: which cri-dockerd
	I0229 02:15:17.862101    8584 command_runner.go:130] > /usr/bin/cri-dockerd
	I0229 02:15:17.871161    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 02:15:17.889692    8584 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 02:15:17.933551    8584 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 02:15:18.134287    8584 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 02:15:18.310331    8584 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 02:15:18.310331    8584 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
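	The 130-byte daemon.json pushed here is what pins Docker's cgroup driver. Its exact contents are not captured in the log; a plausible minimal shape, written to a /tmp path for illustration (the keys are assumptions based on Docker's documented daemon options, not the bytes minikube actually sent):

```shell
# Hypothetical reconstruction of the daemon.json scp'd above;
# written to /tmp so nothing real is touched.
cat > /tmp/daemon-demo.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file"
}
EOF
```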
	I0229 02:15:18.357955    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:15:18.552365    8584 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 02:15:20.070091    8584 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5176409s)
	I0229 02:15:20.081202    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 02:15:20.122115    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:15:20.159070    8584 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 02:15:20.360745    8584 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 02:15:20.562103    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:15:20.747807    8584 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 02:15:20.790021    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:15:20.823798    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:15:21.024568    8584 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 02:15:21.124460    8584 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 02:15:21.138536    8584 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 02:15:21.147715    8584 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0229 02:15:21.147715    8584 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 02:15:21.147715    8584 command_runner.go:130] > Device: 0,22	Inode: 889         Links: 1
	I0229 02:15:21.147715    8584 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0229 02:15:21.147715    8584 command_runner.go:130] > Access: 2024-02-29 02:15:21.219763442 +0000
	I0229 02:15:21.147715    8584 command_runner.go:130] > Modify: 2024-02-29 02:15:21.219763442 +0000
	I0229 02:15:21.147715    8584 command_runner.go:130] > Change: 2024-02-29 02:15:21.223763631 +0000
	I0229 02:15:21.147715    8584 command_runner.go:130] >  Birth: -
	I0229 02:15:21.147715    8584 start.go:543] Will wait 60s for crictl version
	I0229 02:15:21.160607    8584 ssh_runner.go:195] Run: which crictl
	I0229 02:15:21.166613    8584 command_runner.go:130] > /usr/bin/crictl
	I0229 02:15:21.175685    8584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:15:21.243995    8584 command_runner.go:130] > Version:  0.1.0
	I0229 02:15:21.244098    8584 command_runner.go:130] > RuntimeName:  docker
	I0229 02:15:21.244098    8584 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0229 02:15:21.244098    8584 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 02:15:21.244098    8584 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 02:15:21.252876    8584 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:15:21.284945    8584 command_runner.go:130] > 24.0.7
	I0229 02:15:21.293857    8584 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:15:21.328569    8584 command_runner.go:130] > 24.0.7
	I0229 02:15:21.329772    8584 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 02:15:21.329981    8584 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 02:15:21.335723    8584 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 02:15:21.335723    8584 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 02:15:21.335723    8584 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 02:15:21.335830    8584 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:a3:c1 Flags:up|broadcast|multicast|running}
	I0229 02:15:21.339030    8584 ip.go:210] interface addr: fe80::fc78:4865:5cac:d448/64
	I0229 02:15:21.339030    8584 ip.go:210] interface addr: 172.19.0.1/20
	I0229 02:15:21.346674    8584 ssh_runner.go:195] Run: grep 172.19.0.1	host.minikube.internal$ /etc/hosts
	I0229 02:15:21.352657    8584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
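	The /etc/hosts update above strips any stale `host.minikube.internal` entry, appends a fresh one, and copies the result back. The same remove-then-append idiom, run against a /tmp stand-in for /etc/hosts (addresses here are illustrative):

```shell
# Hosts-file update idiom from the log, on a scratch copy.
hosts=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"

tab=$(printf '\t')
# Drop any existing entry for the name, then append the new mapping.
{ grep -v "${tab}host.minikube.internal\$" "$hosts"
  printf '172.19.0.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
```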
	I0229 02:15:21.374301    8584 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:15:21.380708    8584 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 02:15:21.407908    8584 docker.go:685] Got preloaded images: 
	I0229 02:15:21.407908    8584 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0229 02:15:21.417190    8584 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 02:15:21.434433    8584 command_runner.go:139] > {"Repositories":{}}
	I0229 02:15:21.444446    8584 ssh_runner.go:195] Run: which lz4
	I0229 02:15:21.452611    8584 command_runner.go:130] > /usr/bin/lz4
	I0229 02:15:21.453860    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0229 02:15:21.463263    8584 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:15:21.469865    8584 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:15:21.470175    8584 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:15:21.470424    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0229 02:15:23.210150    8584 docker.go:649] Took 1.755758 seconds to copy over tarball
	I0229 02:15:23.222182    8584 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:15:33.289701    8584 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (10.0669568s)
	I0229 02:15:33.289701    8584 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 02:15:33.357787    8584 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 02:15:33.376545    8584 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0229 02:15:33.376717    8584 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0229 02:15:33.419432    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:15:33.617988    8584 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 02:15:35.620810    8584 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.0027096s)
	I0229 02:15:35.628068    8584 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0229 02:15:35.653067    8584 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0229 02:15:35.653067    8584 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:15:35.654344    8584 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 02:15:35.654416    8584 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:15:35.664071    8584 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 02:15:35.699171    8584 command_runner.go:130] > cgroupfs
	I0229 02:15:35.700391    8584 cni.go:84] Creating CNI manager for ""
	I0229 02:15:35.700684    8584 cni.go:136] 1 nodes found, recommending kindnet
	I0229 02:15:35.700684    8584 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:15:35.700770    8584 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.2.165 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-314500 NodeName:multinode-314500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.2.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.2.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:15:35.701130    8584 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.2.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-314500"
	  kubeletExtraArgs:
	    node-ip: 172.19.2.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.2.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:15:35.701263    8584 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-314500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.2.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:15:35.711763    8584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:15:35.728898    8584 command_runner.go:130] > kubeadm
	I0229 02:15:35.728898    8584 command_runner.go:130] > kubectl
	I0229 02:15:35.728898    8584 command_runner.go:130] > kubelet
	I0229 02:15:35.728898    8584 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:15:35.737884    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:15:35.754466    8584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0229 02:15:35.786652    8584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:15:35.818096    8584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0229 02:15:35.860377    8584 ssh_runner.go:195] Run: grep 172.19.2.165	control-plane.minikube.internal$ /etc/hosts
	I0229 02:15:35.867122    8584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.2.165	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:15:35.887430    8584 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500 for IP: 172.19.2.165
	I0229 02:15:35.887430    8584 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:35.888418    8584 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 02:15:35.888418    8584 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 02:15:35.889416    8584 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.key
	I0229 02:15:35.889416    8584 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.crt with IP's: []
	I0229 02:15:36.213588    8584 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.crt ...
	I0229 02:15:36.213588    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.crt: {Name:mk73b75f20ca1d2e0bec389400db48fd623b8015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:36.214068    8584 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.key ...
	I0229 02:15:36.214068    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.key: {Name:mkb1b1a5bd39eef2e9536007ed8aa8f214199fbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:36.215219    8584 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.3d9898f0
	I0229 02:15:36.215219    8584 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.3d9898f0 with IP's: [172.19.2.165 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 02:15:36.494396    8584 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.3d9898f0 ...
	I0229 02:15:36.494396    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.3d9898f0: {Name:mk936caf0d565f97194ec84a769f367930fe715a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:36.495081    8584 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.3d9898f0 ...
	I0229 02:15:36.496079    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.3d9898f0: {Name:mkafd075e8297f3e248df3102b52bd4b41170a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:36.496315    8584 certs.go:337] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.3d9898f0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt
	I0229 02:15:36.510316    8584 certs.go:341] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.3d9898f0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key
	I0229 02:15:36.510683    8584 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key
	I0229 02:15:36.510683    8584 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt with IP's: []
	I0229 02:15:36.721693    8584 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt ...
	I0229 02:15:36.721693    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt: {Name:mkd74b50be0a408b84b859db2dc4cdc2614195ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:36.723948    8584 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key ...
	I0229 02:15:36.724009    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key: {Name:mk76464224e14bc795ee483f0f2ecb96ca808e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:15:36.724747    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 02:15:36.724747    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 02:15:36.725273    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 02:15:36.735647    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 02:15:36.736197    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 02:15:36.736248    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 02:15:36.736248    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 02:15:36.736248    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 02:15:36.737101    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem (1338 bytes)
	W0229 02:15:36.737357    8584 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312_empty.pem, impossibly tiny 0 bytes
	I0229 02:15:36.737357    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 02:15:36.737357    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 02:15:36.737906    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 02:15:36.738244    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0229 02:15:36.738845    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem (1708 bytes)
	I0229 02:15:36.739105    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /usr/share/ca-certificates/33122.pem
	I0229 02:15:36.739320    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:36.739481    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem -> /usr/share/ca-certificates/3312.pem
	I0229 02:15:36.740148    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:15:36.786597    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:15:36.830608    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:15:36.875812    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 02:15:36.921431    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:15:36.966942    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:15:37.013401    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:15:37.059070    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:15:37.106455    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /usr/share/ca-certificates/33122.pem (1708 bytes)
	I0229 02:15:37.156672    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:15:37.203394    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem --> /usr/share/ca-certificates/3312.pem (1338 bytes)
	I0229 02:15:37.251707    8584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:15:37.295710    8584 ssh_runner.go:195] Run: openssl version
	I0229 02:15:37.305455    8584 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 02:15:37.316796    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/33122.pem && ln -fs /usr/share/ca-certificates/33122.pem /etc/ssl/certs/33122.pem"
	I0229 02:15:37.346166    8584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/33122.pem
	I0229 02:15:37.353171    8584 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:15:37.354028    8584 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:15:37.362846    8584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/33122.pem
	I0229 02:15:37.373491    8584 command_runner.go:130] > 3ec20f2e
	I0229 02:15:37.385486    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/33122.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:15:37.415489    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:15:37.444489    8584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:37.451960    8584 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:37.451960    8584 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:37.460116    8584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:15:37.469671    8584 command_runner.go:130] > b5213941
	I0229 02:15:37.480093    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:15:37.508112    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3312.pem && ln -fs /usr/share/ca-certificates/3312.pem /etc/ssl/certs/3312.pem"
	I0229 02:15:37.535081    8584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3312.pem
	I0229 02:15:37.542076    8584 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:15:37.542657    8584 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:15:37.552276    8584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3312.pem
	I0229 02:15:37.561453    8584 command_runner.go:130] > 51391683
	I0229 02:15:37.570468    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3312.pem /etc/ssl/certs/51391683.0"
	I0229 02:15:37.599088    8584 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:15:37.607208    8584 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:15:37.607208    8584 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:15:37.607627    8584 kubeadm.go:404] StartCluster: {Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.165 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:15:37.614406    8584 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 02:15:37.651041    8584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:15:37.669431    8584 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0229 02:15:37.669431    8584 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0229 02:15:37.669431    8584 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0229 02:15:37.679297    8584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:15:37.704096    8584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:15:37.722096    8584 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0229 02:15:37.722096    8584 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0229 02:15:37.722096    8584 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0229 02:15:37.722096    8584 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:15:37.723135    8584 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:15:37.723135    8584 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 02:15:38.381888    8584 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:15:38.381962    8584 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:15:51.901148    8584 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 02:15:51.901148    8584 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0229 02:15:51.901148    8584 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:15:51.901148    8584 command_runner.go:130] > [preflight] Running pre-flight checks
	I0229 02:15:51.901731    8584 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:15:51.901731    8584 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:15:51.901836    8584 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:15:51.901836    8584 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:15:51.902556    8584 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:15:51.902556    8584 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:15:51.902691    8584 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:15:51.902691    8584 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:15:51.903567    8584 out.go:204]   - Generating certificates and keys ...
	I0229 02:15:51.903626    8584 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:15:51.903626    8584 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0229 02:15:51.903626    8584 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:15:51.903626    8584 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0229 02:15:51.904297    8584 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 02:15:51.904297    8584 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 02:15:51.904297    8584 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0229 02:15:51.904297    8584 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 02:15:51.904297    8584 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 02:15:51.904297    8584 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0229 02:15:51.904906    8584 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 02:15:51.904937    8584 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0229 02:15:51.905063    8584 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 02:15:51.905063    8584 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0229 02:15:51.905063    8584 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-314500] and IPs [172.19.2.165 127.0.0.1 ::1]
	I0229 02:15:51.905063    8584 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-314500] and IPs [172.19.2.165 127.0.0.1 ::1]
	I0229 02:15:51.905063    8584 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 02:15:51.905595    8584 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0229 02:15:51.905775    8584 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-314500] and IPs [172.19.2.165 127.0.0.1 ::1]
	I0229 02:15:51.905775    8584 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-314500] and IPs [172.19.2.165 127.0.0.1 ::1]
	I0229 02:15:51.905775    8584 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 02:15:51.905775    8584 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 02:15:51.906311    8584 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 02:15:51.906311    8584 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 02:15:51.906451    8584 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0229 02:15:51.906451    8584 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 02:15:51.906648    8584 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:15:51.906648    8584 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:15:51.906648    8584 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:15:51.906648    8584 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:15:51.906648    8584 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:15:51.906648    8584 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:15:51.907239    8584 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:15:51.907322    8584 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:15:51.907444    8584 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:15:51.907444    8584 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:15:51.907639    8584 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:15:51.907639    8584 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:15:51.907772    8584 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:15:51.907840    8584 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:15:51.908342    8584 out.go:204]   - Booting up control plane ...
	I0229 02:15:51.908342    8584 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:15:51.908342    8584 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:15:51.908868    8584 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:15:51.908868    8584 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:15:51.908983    8584 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:15:51.909056    8584 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:15:51.909179    8584 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:15:51.909179    8584 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:15:51.909179    8584 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:15:51.909179    8584 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:15:51.909179    8584 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:15:51.909179    8584 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 02:15:51.909950    8584 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:15:51.909950    8584 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:15:51.910229    8584 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.507183 seconds
	I0229 02:15:51.910229    8584 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.507183 seconds
	I0229 02:15:51.910438    8584 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:15:51.910552    8584 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:15:51.910616    8584 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:15:51.910616    8584 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:15:51.910616    8584 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:15:51.911258    8584 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:15:51.911797    8584 command_runner.go:130] > [mark-control-plane] Marking the node multinode-314500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:15:51.911912    8584 kubeadm.go:322] [mark-control-plane] Marking the node multinode-314500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:15:51.911912    8584 command_runner.go:130] > [bootstrap-token] Using token: 0hv5co.fj6ugwf787q3himr
	I0229 02:15:51.911912    8584 kubeadm.go:322] [bootstrap-token] Using token: 0hv5co.fj6ugwf787q3himr
	I0229 02:15:51.912545    8584 out.go:204]   - Configuring RBAC rules ...
	I0229 02:15:51.912545    8584 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:15:51.912545    8584 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:15:51.912545    8584 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:15:51.913096    8584 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:15:51.913282    8584 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:15:51.913282    8584 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:15:51.913282    8584 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:15:51.913282    8584 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:15:51.913282    8584 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:15:51.913282    8584 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:15:51.913282    8584 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:15:51.913282    8584 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:15:51.914161    8584 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:15:51.914161    8584 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:15:51.914161    8584 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0229 02:15:51.914161    8584 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:15:51.914161    8584 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:15:51.914161    8584 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0229 02:15:51.914161    8584 kubeadm.go:322] 
	I0229 02:15:51.914161    8584 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0229 02:15:51.914161    8584 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:15:51.914161    8584 kubeadm.go:322] 
	I0229 02:15:51.914161    8584 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0229 02:15:51.914161    8584 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:15:51.914161    8584 kubeadm.go:322] 
	I0229 02:15:51.914161    8584 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0229 02:15:51.914161    8584 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:15:51.914161    8584 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:15:51.914161    8584 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:15:51.914161    8584 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:15:51.915155    8584 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:15:51.915155    8584 kubeadm.go:322] 
	I0229 02:15:51.915155    8584 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0229 02:15:51.915155    8584 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:15:51.915155    8584 kubeadm.go:322] 
	I0229 02:15:51.915155    8584 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:15:51.915155    8584 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:15:51.915155    8584 kubeadm.go:322] 
	I0229 02:15:51.915155    8584 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0229 02:15:51.915155    8584 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:15:51.915155    8584 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:15:51.915155    8584 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:15:51.915155    8584 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:15:51.915155    8584 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:15:51.915155    8584 kubeadm.go:322] 
	I0229 02:15:51.915155    8584 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:15:51.915155    8584 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:15:51.915155    8584 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:15:51.915155    8584 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0229 02:15:51.916151    8584 kubeadm.go:322] 
	I0229 02:15:51.916151    8584 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0hv5co.fj6ugwf787q3himr \
	I0229 02:15:51.916151    8584 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 0hv5co.fj6ugwf787q3himr \
	I0229 02:15:51.916151    8584 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 \
	I0229 02:15:51.916151    8584 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 \
	I0229 02:15:51.916151    8584 command_runner.go:130] > 	--control-plane 
	I0229 02:15:51.916151    8584 kubeadm.go:322] 	--control-plane 
	I0229 02:15:51.916151    8584 kubeadm.go:322] 
	I0229 02:15:51.916151    8584 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:15:51.916151    8584 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:15:51.916151    8584 kubeadm.go:322] 
	I0229 02:15:51.916151    8584 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0hv5co.fj6ugwf787q3himr \
	I0229 02:15:51.916151    8584 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0hv5co.fj6ugwf787q3himr \
	I0229 02:15:51.917165    8584 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 
	I0229 02:15:51.917165    8584 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 
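The `--discovery-token-ca-cert-hash` value printed in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key, and it can be recomputed from `ca.crt` with openssl (this is kubeadm's documented procedure). A minimal self-contained sketch, using a throwaway CA in `/tmp` in place of the real `/etc/kubernetes/pki/ca.crt`:

```shell
# Generate a throwaway CA to stand in for /etc/kubernetes/pki/ca.crt
openssl req -x509 -new -nodes -newkey rsa:2048 -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -subj "/CN=demo-ca" -days 1 2>/dev/null

# Hash the DER-encoded SubjectPublicKeyInfo of the CA public key;
# this reproduces the value kubeadm prints after "sha256:"
hash=$(openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:${hash}"
```

Run against the real CA on the control-plane node, this yields the same `sha256:…` string a worker must pass to `kubeadm join`.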
	I0229 02:15:51.917165    8584 cni.go:84] Creating CNI manager for ""
	I0229 02:15:51.917165    8584 cni.go:136] 1 nodes found, recommending kindnet
	I0229 02:15:51.917165    8584 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0229 02:15:51.926742    8584 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 02:15:51.933753    8584 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 02:15:51.933753    8584 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 02:15:51.933753    8584 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 02:15:51.933753    8584 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 02:15:51.933753    8584 command_runner.go:130] > Access: 2024-02-29 02:14:07.987005400 +0000
	I0229 02:15:51.933753    8584 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 02:15:51.933753    8584 command_runner.go:130] > Change: 2024-02-29 02:13:59.368000000 +0000
	I0229 02:15:51.933753    8584 command_runner.go:130] >  Birth: -
	I0229 02:15:51.934743    8584 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 02:15:51.934743    8584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 02:15:51.986743    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 02:15:53.339082    8584 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0229 02:15:53.347087    8584 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0229 02:15:53.357471    8584 command_runner.go:130] > serviceaccount/kindnet created
	I0229 02:15:53.372482    8584 command_runner.go:130] > daemonset.apps/kindnet created
	I0229 02:15:53.376817    8584 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.3899963s)
	I0229 02:15:53.376885    8584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:15:53.387776    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:53.389804    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=multinode-314500 minikube.k8s.io/updated_at=2024_02_29T02_15_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:53.410555    8584 command_runner.go:130] > -16
	I0229 02:15:53.410635    8584 ops.go:34] apiserver oom_adj: -16
	I0229 02:15:53.572950    8584 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0229 02:15:53.573242    8584 command_runner.go:130] > node/multinode-314500 labeled
	I0229 02:15:53.583665    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:53.702923    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:54.086498    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:54.213077    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:54.589736    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:54.707092    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:55.094365    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:55.219281    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:55.594452    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:55.714603    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:56.086985    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:56.210093    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:56.594292    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:56.710854    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:57.092717    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:57.202893    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:57.596461    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:57.709250    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:58.097022    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:58.207043    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:58.585505    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:58.700383    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:59.087317    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:59.199211    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:15:59.589420    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:15:59.709521    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:00.099207    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:00.248193    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:00.587996    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:00.710610    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:01.089490    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:01.210939    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:01.588438    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:01.719364    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:02.095606    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:02.219852    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:02.583712    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:02.688720    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:03.085804    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:03.198833    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:03.589679    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:03.697234    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:04.094021    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:04.277722    8584 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 02:16:04.585546    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:16:04.713527    8584 command_runner.go:130] > NAME      SECRETS   AGE
	I0229 02:16:04.713527    8584 command_runner.go:130] > default   0         0s
	I0229 02:16:04.713527    8584 kubeadm.go:1088] duration metric: took 11.3359271s to wait for elevateKubeSystemPrivileges.
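The repeated `Error from server (NotFound): serviceaccounts "default" not found` lines above are expected: minikube polls `kubectl get sa default` roughly twice a second until kube-controller-manager creates the account (about 11s here). The same retry-until-ready pattern can be sketched as a self-contained loop, with a hypothetical file standing in for the serviceaccount:

```shell
# Poll-until-ready sketch: retry a check (here a file-existence test standing
# in for `kubectl get sa default`) every 0.5s until it succeeds or we give up.
target="/tmp/sa-ready.$$"            # hypothetical stand-in for the resource
( sleep 1; touch "$target" ) &       # simulate the SA appearing after ~1s
tries=0; result="timeout"
while [ "$tries" -lt 20 ]; do        # 20 * 0.5s = 10s deadline
  if [ -e "$target" ]; then result="ready"; break; fi
  tries=$((tries + 1))
  sleep 0.5
done
echo "default serviceaccount: $result"
wait
rm -f "$target"
```

The fixed retry count bounds the wait, mirroring how the log transitions from NotFound errors to the `NAME SECRETS AGE` success output once the account exists.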
	I0229 02:16:04.713527    8584 kubeadm.go:406] StartCluster complete in 27.1044579s
	I0229 02:16:04.713527    8584 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:16:04.713527    8584 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:16:04.714507    8584 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:16:04.716496    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:16:04.716496    8584 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:16:04.716496    8584 addons.go:69] Setting storage-provisioner=true in profile "multinode-314500"
	I0229 02:16:04.716496    8584 addons.go:234] Setting addon storage-provisioner=true in "multinode-314500"
	I0229 02:16:04.716496    8584 addons.go:69] Setting default-storageclass=true in profile "multinode-314500"
	I0229 02:16:04.716496    8584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-314500"
	I0229 02:16:04.716496    8584 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:16:04.716496    8584 host.go:66] Checking if "multinode-314500" exists ...
	I0229 02:16:04.717509    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:16:04.718505    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:16:04.730512    8584 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:16:04.731520    8584 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:16:04.732504    8584 cert_rotation.go:137] Starting client certificate rotation controller
	I0229 02:16:04.732504    8584 round_trippers.go:463] GET https://172.19.2.165:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 02:16:04.733522    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:04.733522    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:04.733522    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:04.749641    8584 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0229 02:16:04.750464    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:04.750464    8584 round_trippers.go:580]     Audit-Id: 9956226a-c219-49d1-8683-804ff4a7c6af
	I0229 02:16:04.750525    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:04.750525    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:04.750525    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:04.750525    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:04.750525    8584 round_trippers.go:580]     Content-Length: 291
	I0229 02:16:04.750525    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:04 GMT
	I0229 02:16:04.750525    8584 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"255","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0229 02:16:04.751271    8584 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"255","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0229 02:16:04.751368    8584 round_trippers.go:463] PUT https://172.19.2.165:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 02:16:04.751368    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:04.751368    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:04.751368    8584 round_trippers.go:473]     Content-Type: application/json
	I0229 02:16:04.751368    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:04.770121    8584 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0229 02:16:04.770435    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:04.770435    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:04.770435    8584 round_trippers.go:580]     Content-Length: 291
	I0229 02:16:04.770435    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:04 GMT
	I0229 02:16:04.770435    8584 round_trippers.go:580]     Audit-Id: 926adfd2-ba76-4038-9182-d6c558cc8d06
	I0229 02:16:04.770435    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:04.770518    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:04.770518    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:04.770518    8584 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"337","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0229 02:16:04.883459    8584 command_runner.go:130] > apiVersion: v1
	I0229 02:16:04.883862    8584 command_runner.go:130] > data:
	I0229 02:16:04.883862    8584 command_runner.go:130] >   Corefile: |
	I0229 02:16:04.884003    8584 command_runner.go:130] >     .:53 {
	I0229 02:16:04.884003    8584 command_runner.go:130] >         errors
	I0229 02:16:04.884003    8584 command_runner.go:130] >         health {
	I0229 02:16:04.884003    8584 command_runner.go:130] >            lameduck 5s
	I0229 02:16:04.884003    8584 command_runner.go:130] >         }
	I0229 02:16:04.884126    8584 command_runner.go:130] >         ready
	I0229 02:16:04.884188    8584 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0229 02:16:04.884188    8584 command_runner.go:130] >            pods insecure
	I0229 02:16:04.884188    8584 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0229 02:16:04.884188    8584 command_runner.go:130] >            ttl 30
	I0229 02:16:04.884188    8584 command_runner.go:130] >         }
	I0229 02:16:04.884188    8584 command_runner.go:130] >         prometheus :9153
	I0229 02:16:04.884188    8584 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0229 02:16:04.884188    8584 command_runner.go:130] >            max_concurrent 1000
	I0229 02:16:04.884188    8584 command_runner.go:130] >         }
	I0229 02:16:04.884188    8584 command_runner.go:130] >         cache 30
	I0229 02:16:04.884188    8584 command_runner.go:130] >         loop
	I0229 02:16:04.884188    8584 command_runner.go:130] >         reload
	I0229 02:16:04.884188    8584 command_runner.go:130] >         loadbalance
	I0229 02:16:04.884188    8584 command_runner.go:130] >     }
	I0229 02:16:04.884188    8584 command_runner.go:130] > kind: ConfigMap
	I0229 02:16:04.884188    8584 command_runner.go:130] > metadata:
	I0229 02:16:04.884188    8584 command_runner.go:130] >   creationTimestamp: "2024-02-29T02:15:51Z"
	I0229 02:16:04.884188    8584 command_runner.go:130] >   name: coredns
	I0229 02:16:04.884188    8584 command_runner.go:130] >   namespace: kube-system
	I0229 02:16:04.884188    8584 command_runner.go:130] >   resourceVersion: "251"
	I0229 02:16:04.884188    8584 command_runner.go:130] >   uid: 3fc93d17-14a4-4d49-9f77-f2cd8adceaed
	I0229 02:16:04.887987    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 02:16:05.242860    8584 round_trippers.go:463] GET https://172.19.2.165:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 02:16:05.242860    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:05.242860    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:05.242860    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:05.287074    8584 round_trippers.go:574] Response Status: 200 OK in 43 milliseconds
	I0229 02:16:05.287143    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:05.287143    8584 round_trippers.go:580]     Content-Length: 291
	I0229 02:16:05.287213    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:05 GMT
	I0229 02:16:05.287213    8584 round_trippers.go:580]     Audit-Id: e6e6cf94-608a-4333-ac18-3d38f86552f2
	I0229 02:16:05.287213    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:05.287213    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:05.287213    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:05.287213    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:05.289816    8584 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"367","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0229 02:16:05.290759    8584 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-314500" context rescaled to 1 replicas
	I0229 02:16:05.290835    8584 start.go:223] Will wait 6m0s for node &{Name: IP:172.19.2.165 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 02:16:05.291722    8584 out.go:177] * Verifying Kubernetes components...
	I0229 02:16:05.303433    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:16:05.612313    8584 command_runner.go:130] > configmap/coredns replaced
	I0229 02:16:05.617363    8584 start.go:929] {"host.minikube.internal": 172.19.0.1} host record injected into CoreDNS's ConfigMap
	I0229 02:16:05.618519    8584 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:16:05.619544    8584 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:16:05.620617    8584 node_ready.go:35] waiting up to 6m0s for node "multinode-314500" to be "Ready" ...
	I0229 02:16:05.620617    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:05.620617    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:05.620617    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:05.620617    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:05.625396    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:05.625396    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:05.625396    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:05.625396    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:05.625396    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:05.625396    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:05.625396    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:05 GMT
	I0229 02:16:05.625396    8584 round_trippers.go:580]     Audit-Id: 410524b5-ba74-4eed-b6ad-c164114a2e45
	I0229 02:16:05.626569    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:06.130951    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:06.130951    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:06.130951    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:06.130951    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:06.134758    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:06.135746    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:06.135746    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:06.135746    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:06 GMT
	I0229 02:16:06.135746    8584 round_trippers.go:580]     Audit-Id: d3921daf-0cf7-4693-9c8c-01eed6add86d
	I0229 02:16:06.135746    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:06.135746    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:06.135871    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:06.136309    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:06.622511    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:06.622511    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:06.622511    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:06.622511    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:06.628940    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:16:06.628940    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:06.628940    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:06.628940    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:06 GMT
	I0229 02:16:06.628940    8584 round_trippers.go:580]     Audit-Id: dde4c73f-476a-4c04-8fb3-4461985f3b72
	I0229 02:16:06.628940    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:06.628940    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:06.628940    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:06.630172    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:06.883598    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:16:06.883988    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:06.885333    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:16:06.885333    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:06.886306    8584 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:16:06.886086    8584 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:16:06.887008    8584 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:16:06.887171    8584 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:16:06.887245    8584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:16:06.887293    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:16:06.888249    8584 addons.go:234] Setting addon default-storageclass=true in "multinode-314500"
	I0229 02:16:06.888325    8584 host.go:66] Checking if "multinode-314500" exists ...
	I0229 02:16:06.888997    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:16:07.129415    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:07.129415    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:07.129415    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:07.129415    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:07.137838    8584 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:16:07.137912    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:07.137912    8584 round_trippers.go:580]     Audit-Id: 399d2e3f-e8cf-4920-9750-05d41b929aad
	I0229 02:16:07.137912    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:07.137912    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:07.138018    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:07.138048    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:07.138048    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:07 GMT
	I0229 02:16:07.138329    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:07.622304    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:07.622304    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:07.622304    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:07.622304    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:07.633000    8584 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0229 02:16:07.633053    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:07.633053    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:07.633123    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:07.633123    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:07.633123    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:07.633123    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:07 GMT
	I0229 02:16:07.633123    8584 round_trippers.go:580]     Audit-Id: 6c87ad14-b146-42a7-ae05-253fa6399983
	I0229 02:16:07.633497    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:07.634314    8584 node_ready.go:58] node "multinode-314500" has status "Ready":"False"
	I0229 02:16:08.129012    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:08.129128    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:08.129128    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:08.129128    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:08.133061    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:08.133061    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:08.133061    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:08.133061    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:08.133061    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:08 GMT
	I0229 02:16:08.133061    8584 round_trippers.go:580]     Audit-Id: 4486154a-148b-4852-9398-d4ef707b126a
	I0229 02:16:08.133061    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:08.133061    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:08.133587    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:08.622112    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:08.622112    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:08.622112    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:08.622112    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:08.625110    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:16:08.625110    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:08.625110    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:08 GMT
	I0229 02:16:08.625110    8584 round_trippers.go:580]     Audit-Id: 9fda18cb-76a8-4b72-85bc-268e5c5ee771
	I0229 02:16:08.625110    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:08.625110    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:08.625110    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:08.625110    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:08.626110    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:09.062859    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:16:09.062859    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:09.062859    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:16:09.128069    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:16:09.128069    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:09.128168    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:09.128168    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:09.128168    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:09.128168    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:09.128282    8584 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:16:09.128363    8584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:16:09.128396    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:16:09.132486    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:09.132486    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:09.132486    8584 round_trippers.go:580]     Audit-Id: f086feb0-3bd9-4370-9635-53e735870f89
	I0229 02:16:09.132486    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:09.132486    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:09.132486    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:09.132486    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:09.132486    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:09 GMT
	I0229 02:16:09.133491    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:09.626134    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:09.626226    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:09.626226    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:09.626226    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:09.631701    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:16:09.631701    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:09.631701    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:09.631701    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:09.631701    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:09.631701    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:09.631701    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:09 GMT
	I0229 02:16:09.631701    8584 round_trippers.go:580]     Audit-Id: a545aa49-b83a-4003-984f-45f9fe202d60
	I0229 02:16:09.631701    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:10.130946    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:10.130946    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:10.130946    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:10.130946    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:10.134969    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:10.135394    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:10.135394    8584 round_trippers.go:580]     Audit-Id: 1724a3d5-9143-406a-bca9-05b66a0b2969
	I0229 02:16:10.135394    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:10.135394    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:10.135394    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:10.135394    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:10.135394    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:10 GMT
	I0229 02:16:10.135694    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:10.136156    8584 node_ready.go:58] node "multinode-314500" has status "Ready":"False"
	I0229 02:16:10.622330    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:10.622330    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:10.622420    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:10.622420    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:10.625946    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:10.625946    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:10.625946    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:10 GMT
	I0229 02:16:10.625946    8584 round_trippers.go:580]     Audit-Id: 7d5dc576-023c-4d62-8b5e-1f61e1eb4c92
	I0229 02:16:10.625946    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:10.625946    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:10.625946    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:10.625946    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:10.625946    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:11.130592    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:11.130592    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:11.130686    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:11.130686    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:11.133777    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:11.134244    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:11.134244    8584 round_trippers.go:580]     Audit-Id: 6c014d3d-aaf2-4324-a394-1f4ceda7527a
	I0229 02:16:11.134244    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:11.134244    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:11.134244    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:11.134244    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:11.134244    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:11 GMT
	I0229 02:16:11.134511    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:11.279789    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:16:11.280790    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:11.280889    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:16:11.611705    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:16:11.611705    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:11.613235    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:16:11.622115    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:11.622115    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:11.622115    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:11.622115    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:11.626134    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:11.626583    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:11.626583    8584 round_trippers.go:580]     Audit-Id: 7ca4c11f-3d0b-4b6a-aeae-c8176d56d748
	I0229 02:16:11.626583    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:11.626583    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:11.626583    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:11.626583    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:11.626583    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:11 GMT
	I0229 02:16:11.626743    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:11.746983    8584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:16:12.129858    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:12.129858    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:12.129858    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:12.129858    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:12.134103    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:12.134185    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:12.134185    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:12.134185    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:12.134185    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:12 GMT
	I0229 02:16:12.134185    8584 round_trippers.go:580]     Audit-Id: e300e49e-48d6-4796-b3e3-283ceb52ba8d
	I0229 02:16:12.134185    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:12.134185    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:12.134399    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:12.424764    8584 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0229 02:16:12.424842    8584 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0229 02:16:12.424922    8584 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0229 02:16:12.425012    8584 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0229 02:16:12.425012    8584 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0229 02:16:12.425069    8584 command_runner.go:130] > pod/storage-provisioner created
	I0229 02:16:12.621581    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:12.621581    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:12.621581    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:12.621581    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:12.625839    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:12.625917    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:12.625980    8584 round_trippers.go:580]     Audit-Id: ab822128-f5fe-4739-8fe5-bd7b6f1890e7
	I0229 02:16:12.625980    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:12.625980    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:12.625980    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:12.625980    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:12.625980    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:12 GMT
	I0229 02:16:12.626299    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:12.626886    8584 node_ready.go:58] node "multinode-314500" has status "Ready":"False"
	I0229 02:16:13.130997    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:13.130997    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:13.130997    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:13.130997    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:13.137409    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:16:13.137482    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:13.137482    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:13.137482    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:13.137482    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:13.137482    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:13.137482    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:13 GMT
	I0229 02:16:13.137482    8584 round_trippers.go:580]     Audit-Id: ba54e846-36f6-446a-839e-4e0e3c8dba08
	I0229 02:16:13.137692    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:13.621687    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:13.621687    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:13.621687    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:13.621687    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:13.624271    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:16:13.625273    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:13.625273    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:13.625273    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:13.625273    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:13.625273    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:13.625273    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:13 GMT
	I0229 02:16:13.625273    8584 round_trippers.go:580]     Audit-Id: 87a66a52-80a0-45f3-8af7-9d492d7d293b
	I0229 02:16:13.625391    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:13.739754    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:16:13.739808    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:13.739808    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:16:13.872755    8584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:16:14.123275    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:14.123367    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:14.123367    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:14.123367    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:14.126646    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:16:14.126646    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:14.126646    8584 round_trippers.go:580]     Audit-Id: 19134671-8c5f-4095-b846-f6fbd46bcd0b
	I0229 02:16:14.126646    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:14.126646    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:14.126646    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:14.126646    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:14.126747    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:14 GMT
	I0229 02:16:14.127021    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:14.135079    8584 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0229 02:16:14.135079    8584 round_trippers.go:463] GET https://172.19.2.165:8443/apis/storage.k8s.io/v1/storageclasses
	I0229 02:16:14.135079    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:14.135079    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:14.135605    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:14.138653    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:14.138653    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:14.138653    8584 round_trippers.go:580]     Audit-Id: 3a1a0ba3-f2e4-4d64-b6c4-3de42a6386a0
	I0229 02:16:14.138653    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:14.138653    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:14.138653    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:14.138653    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:14.138653    8584 round_trippers.go:580]     Content-Length: 1273
	I0229 02:16:14.138653    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:14 GMT
	I0229 02:16:14.138653    8584 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"standard","uid":"a7ad9511-65e8-4eef-89b4-7c1b803fc689","resourceVersion":"413","creationTimestamp":"2024-02-29T02:16:14Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-29T02:16:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0229 02:16:14.138653    8584 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a7ad9511-65e8-4eef-89b4-7c1b803fc689","resourceVersion":"413","creationTimestamp":"2024-02-29T02:16:14Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-29T02:16:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0229 02:16:14.138653    8584 round_trippers.go:463] PUT https://172.19.2.165:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0229 02:16:14.138653    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:14.138653    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:14.138653    8584 round_trippers.go:473]     Content-Type: application/json
	I0229 02:16:14.138653    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:14.143659    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:16:14.143659    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:14.143659    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:14.143659    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:14.143659    8584 round_trippers.go:580]     Content-Length: 1220
	I0229 02:16:14.143659    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:14 GMT
	I0229 02:16:14.143659    8584 round_trippers.go:580]     Audit-Id: 0eeb2b85-2218-4fa6-a0d6-7d8e8b89a118
	I0229 02:16:14.143659    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:14.143659    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:14.143659    8584 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a7ad9511-65e8-4eef-89b4-7c1b803fc689","resourceVersion":"413","creationTimestamp":"2024-02-29T02:16:14Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-29T02:16:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0229 02:16:14.144910    8584 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 02:16:14.144910    8584 addons.go:505] enable addons completed in 9.4278877s: enabled=[storage-provisioner default-storageclass]
	I0229 02:16:14.631487    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:14.631603    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:14.631603    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:14.631603    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:14.635120    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:14.635120    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:14.635120    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:14.635120    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:14 GMT
	I0229 02:16:14.635120    8584 round_trippers.go:580]     Audit-Id: 448a4e05-de72-4089-adc3-a0cf52036b54
	I0229 02:16:14.635120    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:14.635120    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:14.635120    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:14.635840    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:14.636842    8584 node_ready.go:58] node "multinode-314500" has status "Ready":"False"
	I0229 02:16:15.134789    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:15.134789    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:15.134789    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:15.134789    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:15.138353    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:15.138353    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:15.138353    8584 round_trippers.go:580]     Audit-Id: 0c5724bc-14bf-4e22-8b28-2eed750f5e6b
	I0229 02:16:15.138353    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:15.138353    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:15.138353    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:15.138353    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:15.138353    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:15 GMT
	I0229 02:16:15.139035    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:15.636203    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:15.636203    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:15.636203    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:15.636203    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:15.639886    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:15.639886    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:15.639886    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:15 GMT
	I0229 02:16:15.639886    8584 round_trippers.go:580]     Audit-Id: b2f94694-f112-41b9-8bba-5b0a24ebff15
	I0229 02:16:15.639886    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:15.639886    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:15.639886    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:15.639886    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:15.640603    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:16.124483    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:16.124483    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:16.124483    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:16.124483    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:16.128036    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:16.128036    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:16.128036    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:16.128036    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:16.128036    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:16.128036    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:16 GMT
	I0229 02:16:16.128036    8584 round_trippers.go:580]     Audit-Id: 65ab147a-6009-41b7-8632-6cf748b1a929
	I0229 02:16:16.128036    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:16.128774    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"354","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 02:16:16.630690    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:16.630690    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:16.630690    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:16.630690    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:16.633754    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:16.634195    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:16.634195    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:16 GMT
	I0229 02:16:16.634195    8584 round_trippers.go:580]     Audit-Id: ba8279af-ce65-46db-a113-cfbea5d58aec
	I0229 02:16:16.634195    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:16.634195    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:16.634195    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:16.634247    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:16.634530    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:16.635027    8584 node_ready.go:49] node "multinode-314500" has status "Ready":"True"
	I0229 02:16:16.635027    8584 node_ready.go:38] duration metric: took 11.013794s waiting for node "multinode-314500" to be "Ready" ...
	I0229 02:16:16.635027    8584 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:16:16.635027    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods
	I0229 02:16:16.635027    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:16.635027    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:16.635027    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:16.638680    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:16.638680    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:16.638680    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:16.638680    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:16.638680    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:16.638680    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:16 GMT
	I0229 02:16:16.638680    8584 round_trippers.go:580]     Audit-Id: a971a97c-8e2b-4fb0-abd4-182b3286afda
	I0229 02:16:16.638680    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:16.639968    8584 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"420","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53932 chars]
	I0229 02:16:16.644805    8584 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:16.644983    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:16:16.644983    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:16.645026    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:16.645026    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:16.649483    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:16.649525    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:16.649525    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:16.649525    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:16 GMT
	I0229 02:16:16.649525    8584 round_trippers.go:580]     Audit-Id: 598d01c3-5e69-4f62-935f-f65a0e597752
	I0229 02:16:16.649562    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:16.649618    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:16.649618    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:16.649618    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"420","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0229 02:16:16.650559    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:16.650614    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:16.650614    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:16.650614    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:16.653509    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:16:16.653509    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:16.653509    8584 round_trippers.go:580]     Audit-Id: d4632fc3-b104-4774-9f8d-ad65a9b99634
	I0229 02:16:16.653509    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:16.653509    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:16.653509    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:16.653509    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:16.653509    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:16 GMT
	I0229 02:16:16.653509    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.153751    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:16:17.153915    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.153915    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.153915    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.157465    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:17.157656    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.157656    8584 round_trippers.go:580]     Audit-Id: 58765edd-d51c-4bd1-aba2-02e7a49d9565
	I0229 02:16:17.157656    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.157656    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.157656    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.157656    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.157656    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.157656    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"420","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0229 02:16:17.159074    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.159074    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.159198    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.159261    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.165635    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:16:17.165635    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.165635    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.165635    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.165635    8584 round_trippers.go:580]     Audit-Id: e2d12760-48b9-4e0d-bde2-ffc401c1ae39
	I0229 02:16:17.165635    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.165635    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.165635    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.166245    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.646141    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:16:17.646196    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.646264    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.646264    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.649568    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:17.649568    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.649568    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.649568    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.649568    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.649568    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.649568    8584 round_trippers.go:580]     Audit-Id: 643bfd9c-db53-4709-889d-f2c3b799b531
	I0229 02:16:17.649568    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.649568    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6282 chars]
	I0229 02:16:17.650897    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.650897    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.650950    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.650950    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.653872    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:16:17.653872    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.653872    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.653969    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.654052    8584 round_trippers.go:580]     Audit-Id: bd9d3b81-4e48-4cd4-b61c-872a7afd1012
	I0229 02:16:17.654052    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.654052    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.654083    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.654372    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.654824    8584 pod_ready.go:92] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"True"
	I0229 02:16:17.654879    8584 pod_ready.go:81] duration metric: took 1.0099842s waiting for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.654879    8584 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.655009    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-314500
	I0229 02:16:17.655009    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.655009    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.655009    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.665273    8584 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0229 02:16:17.665273    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.665273    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.665273    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.665273    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.665273    8584 round_trippers.go:580]     Audit-Id: 526c5f16-2a66-45ce-8632-d0f9fa5f6ba7
	I0229 02:16:17.665273    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.665273    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.667768    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-314500","namespace":"kube-system","uid":"6fc42e7c-48f9-46df-bf2f-861e0684e37f","resourceVersion":"323","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.2.165:2379","kubernetes.io/config.hash":"0b84e88097a2b59a9c108b0f9fa2b889","kubernetes.io/config.mirror":"0b84e88097a2b59a9c108b0f9fa2b889","kubernetes.io/config.seen":"2024-02-29T02:15:52.221392786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5852 chars]
	I0229 02:16:17.668271    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.668271    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.668271    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.668271    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.677864    8584 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0229 02:16:17.677864    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.677864    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.677864    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.677864    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.677864    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.677864    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.677864    8584 round_trippers.go:580]     Audit-Id: 4d992db8-60ef-49b3-b2e9-0703ba54de12
	I0229 02:16:17.678938    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.678938    8584 pod_ready.go:92] pod "etcd-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:16:17.678938    8584 pod_ready.go:81] duration metric: took 24.0576ms waiting for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.678938    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.679572    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-314500
	I0229 02:16:17.679572    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.679622    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.679622    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.683833    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:17.683833    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.683833    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.684456    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.684456    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.684456    8584 round_trippers.go:580]     Audit-Id: 6f1b85da-922b-459d-a8dc-fb211d6b23dc
	I0229 02:16:17.684456    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.684456    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.684668    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-314500","namespace":"kube-system","uid":"fc266082-ff2c-4bd1-951f-11dc765a28ae","resourceVersion":"303","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.2.165:8443","kubernetes.io/config.hash":"75abc10fab898952206cc1d682d3c922","kubernetes.io/config.mirror":"75abc10fab898952206cc1d682d3c922","kubernetes.io/config.seen":"2024-02-29T02:15:52.221397486Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7390 chars]
	I0229 02:16:17.685312    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.685312    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.685365    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.685365    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.690438    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:16:17.690438    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.690438    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.690438    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.690438    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.690438    8584 round_trippers.go:580]     Audit-Id: f815fd6b-646c-44c0-9468-208bff1f7a45
	I0229 02:16:17.690438    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.690438    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.690823    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.691302    8584 pod_ready.go:92] pod "kube-apiserver-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:16:17.691364    8584 pod_ready.go:81] duration metric: took 12.4254ms waiting for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.691364    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.691491    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-314500
	I0229 02:16:17.691491    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.691491    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.691491    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.693699    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:16:17.694098    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.694098    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.694098    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.694098    8584 round_trippers.go:580]     Audit-Id: bb9e2109-665e-49c3-ac65-cbc158c70f3e
	I0229 02:16:17.694098    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.694195    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.694195    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.694402    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-314500","namespace":"kube-system","uid":"58e57902-e113-44a9-b5b5-4aba2ba13491","resourceVersion":"302","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.mirror":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.seen":"2024-02-29T02:15:52.221398986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6965 chars]
	I0229 02:16:17.695017    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.695067    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.695067    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.695067    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.698234    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:17.698281    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.698281    8584 round_trippers.go:580]     Audit-Id: 148ca40f-d5fb-49be-8b8a-09cc4e3afa18
	I0229 02:16:17.698281    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.698281    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.698339    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.698339    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.698388    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.699249    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.699313    8584 pod_ready.go:92] pod "kube-controller-manager-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:16:17.699313    8584 pod_ready.go:81] duration metric: took 7.8948ms waiting for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.699313    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.699313    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:16:17.699313    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.699313    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.699313    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.702891    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:17.703633    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.703633    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:17 GMT
	I0229 02:16:17.703633    8584 round_trippers.go:580]     Audit-Id: 7025cb07-a461-4530-bdd7-f2453b2a2350
	I0229 02:16:17.703633    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.703633    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.703633    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.703633    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.703905    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6r6j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b84b22d-3786-4f9e-a23a-c7cfc93bb671","resourceVersion":"394","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0229 02:16:17.832880    8584 request.go:629] Waited for 126.8086ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.832880    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:17.832880    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:17.832880    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:17.832880    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:17.836917    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:17.836917    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:17.836917    8584 round_trippers.go:580]     Audit-Id: 52268278-8be7-4449-a4bc-d534692682ee
	I0229 02:16:17.836917    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:17.836917    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:17.836917    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:17.836917    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:17.836917    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:18 GMT
	I0229 02:16:17.837455    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:17.837896    8584 pod_ready.go:92] pod "kube-proxy-6r6j4" in "kube-system" namespace has status "Ready":"True"
	I0229 02:16:17.837896    8584 pod_ready.go:81] duration metric: took 138.5747ms waiting for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:17.837896    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:18.036948    8584 request.go:629] Waited for 198.7966ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:16:18.037077    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:16:18.037077    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:18.037077    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:18.037077    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:18.040666    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:18.040666    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:18.040666    8584 round_trippers.go:580]     Audit-Id: 97ba3e81-c240-4d8f-a9e6-117a64b5672c
	I0229 02:16:18.041515    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:18.041515    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:18.041515    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:18.041515    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:18.041515    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:18 GMT
	I0229 02:16:18.041693    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-314500","namespace":"kube-system","uid":"31fcecc6-17de-43a6-892d-37cd915de64b","resourceVersion":"288","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.mirror":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.seen":"2024-02-29T02:15:52.221399886Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4695 chars]
	I0229 02:16:18.240929    8584 request.go:629] Waited for 198.242ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:18.241375    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:16:18.241435    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:18.241435    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:18.241435    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:18.244752    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:18.245526    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:18.245526    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:18.245611    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:18 GMT
	I0229 02:16:18.245611    8584 round_trippers.go:580]     Audit-Id: 7935c4ce-ff7f-4b35-bff9-a77da52c6dda
	I0229 02:16:18.245611    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:18.245611    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:18.245611    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:18.245611    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 02:16:18.246214    8584 pod_ready.go:92] pod "kube-scheduler-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:16:18.246214    8584 pod_ready.go:81] duration metric: took 408.2266ms waiting for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:16:18.246214    8584 pod_ready.go:38] duration metric: took 1.6110974s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:16:18.246214    8584 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:16:18.257038    8584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:16:18.283407    8584 command_runner.go:130] > 2018
	I0229 02:16:18.283407    8584 api_server.go:72] duration metric: took 12.9918453s to wait for apiserver process to appear ...
	I0229 02:16:18.283407    8584 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:16:18.283407    8584 api_server.go:253] Checking apiserver healthz at https://172.19.2.165:8443/healthz ...
	I0229 02:16:18.292685    8584 api_server.go:279] https://172.19.2.165:8443/healthz returned 200:
	ok
	I0229 02:16:18.293146    8584 round_trippers.go:463] GET https://172.19.2.165:8443/version
	I0229 02:16:18.293146    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:18.293146    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:18.293146    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:18.296745    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:18.296766    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:18.296766    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:18 GMT
	I0229 02:16:18.296844    8584 round_trippers.go:580]     Audit-Id: a3568257-7ba8-46aa-906e-199f937d3cb2
	I0229 02:16:18.296844    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:18.296844    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:18.296844    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:18.296844    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:18.296844    8584 round_trippers.go:580]     Content-Length: 264
	I0229 02:16:18.296933    8584 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0229 02:16:18.297126    8584 api_server.go:141] control plane version: v1.28.4
	I0229 02:16:18.297126    8584 api_server.go:131] duration metric: took 13.7187ms to wait for apiserver health ...
	I0229 02:16:18.297126    8584 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:16:18.441150    8584 request.go:629] Waited for 143.8801ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods
	I0229 02:16:18.441150    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods
	I0229 02:16:18.441150    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:18.441150    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:18.441150    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:18.446130    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:18.446130    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:18.446130    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:18 GMT
	I0229 02:16:18.446130    8584 round_trippers.go:580]     Audit-Id: ad8e47f3-2e6e-4c08-9bc9-672b7124a085
	I0229 02:16:18.446130    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:18.446130    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:18.446130    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:18.446130    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:18.447912    8584 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"439"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54048 chars]
	I0229 02:16:18.450435    8584 system_pods.go:59] 8 kube-system pods found
	I0229 02:16:18.450435    8584 system_pods.go:61] "coredns-5dd5756b68-8g6tg" [ef7fb259-9f24-4645-9eff-2b16f6789e1b] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "etcd-multinode-314500" [6fc42e7c-48f9-46df-bf2f-861e0684e37f] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "kindnet-t9r77" [4620d417-744c-4049-82ab-79d1ee7f047c] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "kube-apiserver-multinode-314500" [fc266082-ff2c-4bd1-951f-11dc765a28ae] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "kube-controller-manager-multinode-314500" [58e57902-e113-44a9-b5b5-4aba2ba13491] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "kube-proxy-6r6j4" [2b84b22d-3786-4f9e-a23a-c7cfc93bb671] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "kube-scheduler-multinode-314500" [31fcecc6-17de-43a6-892d-37cd915de64b] Running
	I0229 02:16:18.450435    8584 system_pods.go:61] "storage-provisioner" [9780520b-8ff9-408a-ab6f-41b63790ccd1] Running
	I0229 02:16:18.450435    8584 system_pods.go:74] duration metric: took 153.3001ms to wait for pod list to return data ...
	I0229 02:16:18.450435    8584 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:16:18.641470    8584 request.go:629] Waited for 191.0243ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/default/serviceaccounts
	I0229 02:16:18.641470    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/default/serviceaccounts
	I0229 02:16:18.641470    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:18.641470    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:18.641470    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:18.645874    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:18.645874    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:18.645874    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:18 GMT
	I0229 02:16:18.645874    8584 round_trippers.go:580]     Audit-Id: 9e04f5c6-c753-4db9-b22e-07bcf383223a
	I0229 02:16:18.646835    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:18.646835    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:18.646835    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:18.646835    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:18.646835    8584 round_trippers.go:580]     Content-Length: 261
	I0229 02:16:18.646835    8584 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"439"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a442432a-e4e1-4889-bfa8-e3967acc17f0","resourceVersion":"330","creationTimestamp":"2024-02-29T02:16:04Z"}}]}
	I0229 02:16:18.646835    8584 default_sa.go:45] found service account: "default"
	I0229 02:16:18.646835    8584 default_sa.go:55] duration metric: took 196.3895ms for default service account to be created ...
	I0229 02:16:18.646835    8584 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:16:18.844094    8584 request.go:629] Waited for 197.2476ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods
	I0229 02:16:18.844094    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods
	I0229 02:16:18.844094    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:18.844094    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:18.844094    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:18.848446    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:16:18.848446    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:18.848446    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:18.848446    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:18.848446    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:19 GMT
	I0229 02:16:18.848446    8584 round_trippers.go:580]     Audit-Id: 5e1f0b6a-e7f1-4363-96af-41558a1cff57
	I0229 02:16:18.848446    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:18.848446    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:18.850291    8584 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"439"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54048 chars]
	I0229 02:16:18.852542    8584 system_pods.go:86] 8 kube-system pods found
	I0229 02:16:18.852542    8584 system_pods.go:89] "coredns-5dd5756b68-8g6tg" [ef7fb259-9f24-4645-9eff-2b16f6789e1b] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "etcd-multinode-314500" [6fc42e7c-48f9-46df-bf2f-861e0684e37f] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "kindnet-t9r77" [4620d417-744c-4049-82ab-79d1ee7f047c] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "kube-apiserver-multinode-314500" [fc266082-ff2c-4bd1-951f-11dc765a28ae] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "kube-controller-manager-multinode-314500" [58e57902-e113-44a9-b5b5-4aba2ba13491] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "kube-proxy-6r6j4" [2b84b22d-3786-4f9e-a23a-c7cfc93bb671] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "kube-scheduler-multinode-314500" [31fcecc6-17de-43a6-892d-37cd915de64b] Running
	I0229 02:16:18.852542    8584 system_pods.go:89] "storage-provisioner" [9780520b-8ff9-408a-ab6f-41b63790ccd1] Running
	I0229 02:16:18.852542    8584 system_pods.go:126] duration metric: took 205.6953ms to wait for k8s-apps to be running ...
	I0229 02:16:18.852542    8584 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:16:18.861417    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:16:18.887054    8584 system_svc.go:56] duration metric: took 34.4312ms WaitForService to wait for kubelet.
	I0229 02:16:18.887149    8584 kubeadm.go:581] duration metric: took 13.5955543s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:16:18.887215    8584 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:16:19.031410    8584 request.go:629] Waited for 144.1874ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes
	I0229 02:16:19.031606    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes
	I0229 02:16:19.031606    8584 round_trippers.go:469] Request Headers:
	I0229 02:16:19.031606    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:16:19.031606    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:16:19.035104    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:16:19.035104    8584 round_trippers.go:577] Response Headers:
	I0229 02:16:19.035104    8584 round_trippers.go:580]     Audit-Id: 3e53124b-3fb7-4d71-a89e-22e59922a676
	I0229 02:16:19.035104    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:16:19.035104    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:16:19.035507    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:16:19.035507    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:16:19.035507    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:16:19 GMT
	I0229 02:16:19.035795    8584 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"439"},"items":[{"metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"416","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4834 chars]
	I0229 02:16:19.036569    8584 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:16:19.036646    8584 node_conditions.go:123] node cpu capacity is 2
	I0229 02:16:19.036646    8584 node_conditions.go:105] duration metric: took 149.4233ms to run NodePressure ...
	I0229 02:16:19.036755    8584 start.go:228] waiting for startup goroutines ...
	I0229 02:16:19.036755    8584 start.go:233] waiting for cluster config update ...
	I0229 02:16:19.036755    8584 start.go:242] writing updated cluster config ...
	I0229 02:16:19.038683    8584 out.go:177] 
	I0229 02:16:19.055810    8584 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:16:19.055971    8584 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:16:19.059124    8584 out.go:177] * Starting worker node multinode-314500-m02 in cluster multinode-314500
	I0229 02:16:19.059762    8584 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:16:19.059762    8584 cache.go:56] Caching tarball of preloaded images
	I0229 02:16:19.060125    8584 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:16:19.060125    8584 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 02:16:19.060125    8584 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:16:19.069726    8584 start.go:365] acquiring machines lock for multinode-314500-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:16:19.070853    8584 start.go:369] acquired machines lock for "multinode-314500-m02" in 145.1µs
	I0229 02:16:19.071032    8584 start.go:93] Provisioning new machine with config: &{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.165 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequ
ested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 02:16:19.071032    8584 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0229 02:16:19.071291    8584 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 02:16:19.071291    8584 start.go:159] libmachine.API.Create for "multinode-314500" (driver="hyperv")
	I0229 02:16:19.071291    8584 client.go:168] LocalClient.Create starting
	I0229 02:16:19.072518    8584 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0229 02:16:19.072841    8584 main.go:141] libmachine: Decoding PEM data...
	I0229 02:16:19.072841    8584 main.go:141] libmachine: Parsing certificate...
	I0229 02:16:19.073047    8584 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0229 02:16:19.073192    8584 main.go:141] libmachine: Decoding PEM data...
	I0229 02:16:19.073192    8584 main.go:141] libmachine: Parsing certificate...
	I0229 02:16:19.073192    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0229 02:16:20.920705    8584 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0229 02:16:20.920705    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:20.921317    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0229 02:16:22.576054    8584 main.go:141] libmachine: [stdout =====>] : False
	
	I0229 02:16:22.576118    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:22.576186    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 02:16:24.018073    8584 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 02:16:24.018073    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:24.018073    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 02:16:27.519984    8584 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 02:16:27.521004    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:27.522825    8584 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 02:16:27.901527    8584 main.go:141] libmachine: Creating SSH key...
	I0229 02:16:28.097501    8584 main.go:141] libmachine: Creating VM...
	I0229 02:16:28.097501    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 02:16:30.904965    8584 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 02:16:30.905182    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:30.905182    8584 main.go:141] libmachine: Using switch "Default Switch"
	I0229 02:16:30.905182    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 02:16:32.604830    8584 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 02:16:32.604830    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:32.604830    8584 main.go:141] libmachine: Creating VHD
	I0229 02:16:32.604937    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0229 02:16:36.234786    8584 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 68DA3A88-B6E1-46DA-93D1-804B8B5EA2B6
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0229 02:16:36.234786    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:36.234786    8584 main.go:141] libmachine: Writing magic tar header
	I0229 02:16:36.235274    8584 main.go:141] libmachine: Writing SSH key tar header
	I0229 02:16:36.244776    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0229 02:16:39.318116    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:39.318116    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:39.318116    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\disk.vhd' -SizeBytes 20000MB
	I0229 02:16:41.733381    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:41.733986    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:41.734091    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-314500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0229 02:16:45.142995    8584 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-314500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 02:16:45.142995    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:45.143938    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-314500-m02 -DynamicMemoryEnabled $false
	I0229 02:16:47.265484    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:47.265484    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:47.265616    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-314500-m02 -Count 2
	I0229 02:16:49.321416    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:49.321772    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:49.321890    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-314500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\boot2docker.iso'
	I0229 02:16:51.771609    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:51.771808    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:51.771808    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-314500-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\disk.vhd'
	I0229 02:16:54.237843    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:54.238288    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:54.238288    8584 main.go:141] libmachine: Starting VM...
	I0229 02:16:54.238364    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-314500-m02
	I0229 02:16:56.948503    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:16:56.948564    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:56.948564    8584 main.go:141] libmachine: Waiting for host to start...
	I0229 02:16:56.948691    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:16:59.081484    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:16:59.081484    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:16:59.081484    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:01.451137    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:17:01.451137    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:02.451735    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:04.482600    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:04.482600    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:04.482600    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:06.855829    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:17:06.855829    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:07.863335    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:09.971502    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:09.971502    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:09.971663    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:12.324229    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:17:12.324333    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:13.330922    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:15.391366    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:15.391404    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:15.391404    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:17.718844    8584 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:17:17.718973    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:18.726464    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:20.785794    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:20.785794    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:20.785794    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:23.184930    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:23.184930    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:23.185003    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:25.185603    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:25.185847    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:25.185847    8584 machine.go:88] provisioning docker machine ...
	I0229 02:17:25.185847    8584 buildroot.go:166] provisioning hostname "multinode-314500-m02"
	I0229 02:17:25.185847    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:27.225297    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:27.226441    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:27.226473    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:29.607904    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:29.607904    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:29.612460    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:17:29.622734    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:17:29.622734    8584 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-314500-m02 && echo "multinode-314500-m02" | sudo tee /etc/hostname
	I0229 02:17:29.783303    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-314500-m02
	
	I0229 02:17:29.783303    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:31.813172    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:31.813290    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:31.813290    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:34.232804    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:34.233345    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:34.237405    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:17:34.237468    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:17:34.237468    8584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-314500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-314500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-314500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:17:34.392771    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:17:34.392771    8584 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 02:17:34.392853    8584 buildroot.go:174] setting up certificates
	I0229 02:17:34.392853    8584 provision.go:83] configureAuth start
	I0229 02:17:34.392853    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:36.409714    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:36.409714    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:36.409926    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:38.862723    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:38.862870    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:38.862870    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:40.858876    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:40.859201    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:40.859201    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:43.234342    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:43.234419    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:43.234419    8584 provision.go:138] copyHostCerts
	I0229 02:17:43.234567    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 02:17:43.234765    8584 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 02:17:43.234765    8584 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 02:17:43.235285    8584 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 02:17:43.236034    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 02:17:43.236034    8584 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 02:17:43.236034    8584 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 02:17:43.236034    8584 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0229 02:17:43.236807    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 02:17:43.237396    8584 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 02:17:43.237396    8584 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 02:17:43.237497    8584 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 02:17:43.238127    8584 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-314500-m02 san=[172.19.5.202 172.19.5.202 localhost 127.0.0.1 minikube multinode-314500-m02]
	I0229 02:17:43.524218    8584 provision.go:172] copyRemoteCerts
	I0229 02:17:43.533207    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:17:43.533207    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:45.530673    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:45.530673    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:45.530747    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:47.941248    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:47.941248    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:47.942211    8584 sshutil.go:53] new ssh client: &{IP:172.19.5.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\id_rsa Username:docker}
	I0229 02:17:48.060802    8584 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5273422s)
	I0229 02:17:48.060802    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 02:17:48.061398    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 02:17:48.106726    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 02:17:48.107259    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0229 02:17:48.151608    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 02:17:48.152143    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:17:48.200186    8584 provision.go:86] duration metric: configureAuth took 13.8065619s
	I0229 02:17:48.200186    8584 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:17:48.200842    8584 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:17:48.200920    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:50.211498    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:50.211498    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:50.211498    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:52.592758    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:52.592758    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:52.597792    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:17:52.598309    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:17:52.598381    8584 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 02:17:52.757991    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 02:17:52.757991    8584 buildroot.go:70] root file system type: tmpfs
	I0229 02:17:52.757991    8584 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 02:17:52.758523    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:54.794561    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:54.794987    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:54.795068    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:17:57.208524    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:17:57.208524    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:57.212707    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:17:57.213061    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:17:57.213061    8584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.2.165"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 02:17:57.378362    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.2.165
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 02:17:57.378395    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:17:59.428307    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:17:59.428307    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:17:59.428307    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:01.824823    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:01.824823    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:01.828335    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:18:01.828927    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:18:01.828927    8584 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 02:18:02.863847    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 02:18:02.863847    8584 machine.go:91] provisioned docker machine in 37.6758983s
	I0229 02:18:02.863847    8584 client.go:171] LocalClient.Create took 1m43.7867595s
	I0229 02:18:02.864958    8584 start.go:167] duration metric: libmachine.API.Create for "multinode-314500" took 1m43.78787s
	I0229 02:18:02.864958    8584 start.go:300] post-start starting for "multinode-314500-m02" (driver="hyperv")
	I0229 02:18:02.864958    8584 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:18:02.874256    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:18:02.874256    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:04.910564    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:04.910633    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:04.910703    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:07.378336    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:07.378336    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:07.378487    8584 sshutil.go:53] new ssh client: &{IP:172.19.5.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\id_rsa Username:docker}
	I0229 02:18:07.486010    8584 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6114972s)
	I0229 02:18:07.496984    8584 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:18:07.504935    8584 command_runner.go:130] > NAME=Buildroot
	I0229 02:18:07.504935    8584 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 02:18:07.504935    8584 command_runner.go:130] > ID=buildroot
	I0229 02:18:07.504935    8584 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 02:18:07.504935    8584 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 02:18:07.505148    8584 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:18:07.505148    8584 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 02:18:07.505545    8584 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 02:18:07.508348    8584 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> 33122.pem in /etc/ssl/certs
	I0229 02:18:07.508348    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /etc/ssl/certs/33122.pem
	I0229 02:18:07.517641    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:18:07.536722    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /etc/ssl/certs/33122.pem (1708 bytes)
	I0229 02:18:07.582613    8584 start.go:303] post-start completed in 4.7173917s
	I0229 02:18:07.584757    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:09.616749    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:09.616749    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:09.617616    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:12.029126    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:12.029126    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:12.029537    8584 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:18:12.031412    8584 start.go:128] duration metric: createHost completed in 1m52.9539719s
	I0229 02:18:12.031412    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:14.046188    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:14.046538    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:14.046589    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:16.455401    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:16.455976    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:16.461299    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:18:16.461877    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:18:16.461877    8584 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 02:18:16.593240    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173096.763630370
	
	I0229 02:18:16.593344    8584 fix.go:206] guest clock: 1709173096.763630370
	I0229 02:18:16.593344    8584 fix.go:219] Guest: 2024-02-29 02:18:16.76363037 +0000 UTC Remote: 2024-02-29 02:18:12.0314125 +0000 UTC m=+312.004845001 (delta=4.73221787s)
	I0229 02:18:16.593455    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:18.589352    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:18.589352    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:18.589352    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:21.027873    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:21.027947    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:21.033045    8584 main.go:141] libmachine: Using SSH client type: native
	I0229 02:18:21.033045    8584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.5.202 22 <nil> <nil>}
	I0229 02:18:21.033569    8584 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709173096
	I0229 02:18:21.167765    8584 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 02:18:16 UTC 2024
	
	I0229 02:18:21.167765    8584 fix.go:226] clock set: Thu Feb 29 02:18:16 UTC 2024
	 (err=<nil>)
	I0229 02:18:21.167765    8584 start.go:83] releasing machines lock for "multinode-314500-m02", held for 2m2.0900438s
	I0229 02:18:21.167765    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:23.153744    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:23.153744    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:23.153744    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:25.578574    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:25.578574    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:25.578800    8584 out.go:177] * Found network options:
	I0229 02:18:25.580065    8584 out.go:177]   - NO_PROXY=172.19.2.165
	W0229 02:18:25.580612    8584 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 02:18:25.580835    8584 out.go:177]   - NO_PROXY=172.19.2.165
	W0229 02:18:25.581420    8584 proxy.go:119] fail to check proxy env: Error ip not in block
	W0229 02:18:25.583050    8584 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 02:18:25.585206    8584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:18:25.585373    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:25.593744    8584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 02:18:25.594079    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:18:27.674183    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:27.674183    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:27.674183    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:27.675185    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:27.675185    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:27.675185    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:30.173701    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:30.174284    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:30.174503    8584 sshutil.go:53] new ssh client: &{IP:172.19.5.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\id_rsa Username:docker}
	I0229 02:18:30.199190    8584 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:18:30.199190    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:30.199500    8584 sshutil.go:53] new ssh client: &{IP:172.19.5.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\id_rsa Username:docker}
	I0229 02:18:30.277565    8584 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0229 02:18:30.278069    8584 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.6840656s)
	W0229 02:18:30.278069    8584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:18:30.290955    8584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:18:30.389381    8584 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 02:18:30.389381    8584 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0229 02:18:30.389381    8584 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8038229s)
	I0229 02:18:30.389381    8584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:18:30.389381    8584 start.go:475] detecting cgroup driver to use...
	I0229 02:18:30.389381    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:18:30.425450    8584 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0229 02:18:30.436466    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 02:18:30.467218    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:18:30.486122    8584 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:18:30.494627    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:18:30.522647    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:18:30.553444    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:18:30.581124    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:18:30.616953    8584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:18:30.644924    8584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 02:18:30.674292    8584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:18:30.691155    8584 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 02:18:30.703168    8584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:18:30.731843    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:18:30.943189    8584 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 02:18:30.974201    8584 start.go:475] detecting cgroup driver to use...
	I0229 02:18:30.984195    8584 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 02:18:31.010398    8584 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0229 02:18:31.010398    8584 command_runner.go:130] > [Unit]
	I0229 02:18:31.010398    8584 command_runner.go:130] > Description=Docker Application Container Engine
	I0229 02:18:31.010398    8584 command_runner.go:130] > Documentation=https://docs.docker.com
	I0229 02:18:31.010398    8584 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0229 02:18:31.010398    8584 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0229 02:18:31.010398    8584 command_runner.go:130] > StartLimitBurst=3
	I0229 02:18:31.010398    8584 command_runner.go:130] > StartLimitIntervalSec=60
	I0229 02:18:31.010398    8584 command_runner.go:130] > [Service]
	I0229 02:18:31.010398    8584 command_runner.go:130] > Type=notify
	I0229 02:18:31.010398    8584 command_runner.go:130] > Restart=on-failure
	I0229 02:18:31.010398    8584 command_runner.go:130] > Environment=NO_PROXY=172.19.2.165
	I0229 02:18:31.010398    8584 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0229 02:18:31.010398    8584 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0229 02:18:31.010398    8584 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0229 02:18:31.010931    8584 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0229 02:18:31.010981    8584 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0229 02:18:31.011019    8584 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0229 02:18:31.011019    8584 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0229 02:18:31.011082    8584 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0229 02:18:31.011138    8584 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0229 02:18:31.011138    8584 command_runner.go:130] > ExecStart=
	I0229 02:18:31.011197    8584 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0229 02:18:31.011243    8584 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0229 02:18:31.011243    8584 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0229 02:18:31.011315    8584 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0229 02:18:31.011359    8584 command_runner.go:130] > LimitNOFILE=infinity
	I0229 02:18:31.011359    8584 command_runner.go:130] > LimitNPROC=infinity
	I0229 02:18:31.011359    8584 command_runner.go:130] > LimitCORE=infinity
	I0229 02:18:31.011425    8584 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0229 02:18:31.011425    8584 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0229 02:18:31.011425    8584 command_runner.go:130] > TasksMax=infinity
	I0229 02:18:31.011495    8584 command_runner.go:130] > TimeoutStartSec=0
	I0229 02:18:31.011495    8584 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0229 02:18:31.011495    8584 command_runner.go:130] > Delegate=yes
	I0229 02:18:31.011557    8584 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0229 02:18:31.011557    8584 command_runner.go:130] > KillMode=process
	I0229 02:18:31.011557    8584 command_runner.go:130] > [Install]
	I0229 02:18:31.011626    8584 command_runner.go:130] > WantedBy=multi-user.target
	I0229 02:18:31.022514    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:18:31.053734    8584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:18:31.093320    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:18:31.125810    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:18:31.159106    8584 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:18:31.209007    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:18:31.236274    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:18:31.271193    8584 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0229 02:18:31.283174    8584 ssh_runner.go:195] Run: which cri-dockerd
	I0229 02:18:31.290285    8584 command_runner.go:130] > /usr/bin/cri-dockerd
	I0229 02:18:31.300670    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 02:18:31.320930    8584 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 02:18:31.363898    8584 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 02:18:31.567044    8584 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 02:18:31.755853    8584 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 02:18:31.755981    8584 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 02:18:31.800154    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:18:32.002260    8584 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 02:18:33.510987    8584 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5086429s)
	I0229 02:18:33.521617    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 02:18:33.555076    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:18:33.593354    8584 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 02:18:33.787890    8584 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 02:18:34.002397    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:18:34.193768    8584 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 02:18:34.233767    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:18:34.268183    8584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:18:34.461138    8584 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 02:18:34.565934    8584 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 02:18:34.575816    8584 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 02:18:34.586219    8584 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0229 02:18:34.586284    8584 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 02:18:34.586284    8584 command_runner.go:130] > Device: 0,22	Inode: 891         Links: 1
	I0229 02:18:34.586284    8584 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0229 02:18:34.586284    8584 command_runner.go:130] > Access: 2024-02-29 02:18:34.658282101 +0000
	I0229 02:18:34.586284    8584 command_runner.go:130] > Modify: 2024-02-29 02:18:34.658282101 +0000
	I0229 02:18:34.586284    8584 command_runner.go:130] > Change: 2024-02-29 02:18:34.662282244 +0000
	I0229 02:18:34.586356    8584 command_runner.go:130] >  Birth: -
	I0229 02:18:34.586415    8584 start.go:543] Will wait 60s for crictl version
	I0229 02:18:34.594891    8584 ssh_runner.go:195] Run: which crictl
	I0229 02:18:34.600806    8584 command_runner.go:130] > /usr/bin/crictl
	I0229 02:18:34.613152    8584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:18:34.683047    8584 command_runner.go:130] > Version:  0.1.0
	I0229 02:18:34.683047    8584 command_runner.go:130] > RuntimeName:  docker
	I0229 02:18:34.683047    8584 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0229 02:18:34.683047    8584 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 02:18:34.683047    8584 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 02:18:34.690707    8584 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:18:34.727739    8584 command_runner.go:130] > 24.0.7
	I0229 02:18:34.736706    8584 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:18:34.772261    8584 command_runner.go:130] > 24.0.7
	I0229 02:18:34.773681    8584 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 02:18:34.774281    8584 out.go:177]   - env NO_PROXY=172.19.2.165
	I0229 02:18:34.775285    8584 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 02:18:34.778553    8584 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 02:18:34.779106    8584 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 02:18:34.779106    8584 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 02:18:34.779106    8584 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:a3:c1 Flags:up|broadcast|multicast|running}
	I0229 02:18:34.782065    8584 ip.go:210] interface addr: fe80::fc78:4865:5cac:d448/64
	I0229 02:18:34.782065    8584 ip.go:210] interface addr: 172.19.0.1/20
	I0229 02:18:34.790491    8584 ssh_runner.go:195] Run: grep 172.19.0.1	host.minikube.internal$ /etc/hosts
	I0229 02:18:34.796849    8584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
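The `/etc/hosts` update above uses a grep-and-append idiom: filter out any stale line for the name, append the fresh mapping, and copy the result back over the original file. A minimal sketch of the same pattern against a throwaway file in `/tmp` (path and addresses hypothetical, not `/etc/hosts`):

```shell
# Rewrite a hosts-style file so it contains exactly one entry for a name.
HOSTS=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$HOSTS"
# Drop any stale mapping for the name, then append the fresh one; the braces
# group both commands so a single redirect captures the rebuilt file.
{ grep -v 'host\.minikube\.internal$' "$HOSTS"; \
  printf '172.19.0.1\thost.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
```

Writing to a temp file first (as the log does with `/tmp/h.$$` and `sudo cp`) avoids truncating the file mid-read.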
	I0229 02:18:34.818492    8584 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500 for IP: 172.19.5.202
	I0229 02:18:34.818492    8584 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:18:34.818492    8584 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 02:18:34.818492    8584 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 02:18:34.819491    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 02:18:34.819491    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 02:18:34.819491    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 02:18:34.819491    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 02:18:34.819491    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem (1338 bytes)
	W0229 02:18:34.820491    8584 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312_empty.pem, impossibly tiny 0 bytes
	I0229 02:18:34.820491    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 02:18:34.820491    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 02:18:34.820491    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 02:18:34.820491    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0229 02:18:34.821487    8584 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem (1708 bytes)
	I0229 02:18:34.821487    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:18:34.821487    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem -> /usr/share/ca-certificates/3312.pem
	I0229 02:18:34.821487    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /usr/share/ca-certificates/33122.pem
	I0229 02:18:34.822487    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:18:34.868245    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:18:34.918714    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:18:34.967307    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:18:35.017796    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:18:35.066669    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem --> /usr/share/ca-certificates/3312.pem (1338 bytes)
	I0229 02:18:35.114276    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /usr/share/ca-certificates/33122.pem (1708 bytes)
	I0229 02:18:35.168006    8584 ssh_runner.go:195] Run: openssl version
	I0229 02:18:35.176800    8584 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 02:18:35.185691    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:18:35.215735    8584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:18:35.222256    8584 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:18:35.222256    8584 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:18:35.230885    8584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:18:35.240332    8584 command_runner.go:130] > b5213941
	I0229 02:18:35.249159    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:18:35.281031    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3312.pem && ln -fs /usr/share/ca-certificates/3312.pem /etc/ssl/certs/3312.pem"
	I0229 02:18:35.309172    8584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3312.pem
	I0229 02:18:35.315998    8584 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:18:35.315998    8584 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:18:35.326720    8584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3312.pem
	I0229 02:18:35.335106    8584 command_runner.go:130] > 51391683
	I0229 02:18:35.344025    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3312.pem /etc/ssl/certs/51391683.0"
	I0229 02:18:35.372591    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/33122.pem && ln -fs /usr/share/ca-certificates/33122.pem /etc/ssl/certs/33122.pem"
	I0229 02:18:35.406771    8584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/33122.pem
	I0229 02:18:35.415262    8584 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:18:35.415680    8584 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:18:35.425523    8584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/33122.pem
	I0229 02:18:35.433811    8584 command_runner.go:130] > 3ec20f2e
	I0229 02:18:35.445146    8584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/33122.pem /etc/ssl/certs/3ec20f2e.0"
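The `openssl x509 -hash` / `ln -fs` pairs above implement the standard CA trust-store layout: OpenSSL looks up CAs in `/etc/ssl/certs` via symlinks named `<subject-hash>.0`. A self-contained demo of the same steps with a throwaway self-signed certificate in a temp directory (names hypothetical):

```shell
# Build a <subject-hash>.0 symlink, as c_rehash (and the log's ln -fs) does.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout "$DIR/ca.key" -out "$DIR/ca.pem" 2>/dev/null
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")   # e.g. b5213941
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"                   # hash-named symlink
```

The short hex values in the log (`b5213941`, `51391683`, `3ec20f2e`) are exactly these subject-name hashes.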
	I0229 02:18:35.475114    8584 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:18:35.481743    8584 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:18:35.482501    8584 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:18:35.489621    8584 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 02:18:35.524210    8584 command_runner.go:130] > cgroupfs
	I0229 02:18:35.524318    8584 cni.go:84] Creating CNI manager for ""
	I0229 02:18:35.524318    8584 cni.go:136] 2 nodes found, recommending kindnet
	I0229 02:18:35.524318    8584 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:18:35.524429    8584 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.5.202 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-314500 NodeName:multinode-314500-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.2.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.5.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:18:35.524626    8584 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.5.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-314500-m02"
	  kubeletExtraArgs:
	    node-ip: 172.19.5.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.2.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth

	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:18:35.524738    8584 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-314500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.5.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:18:35.533460    8584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:18:35.552711    8584 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0229 02:18:35.552711    8584 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0229 02:18:35.561470    8584 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0229 02:18:35.584271    8584 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet
	I0229 02:18:35.584271    8584 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm
	I0229 02:18:35.584271    8584 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl
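Each `download.go` line above fetches a binary together with its published `.sha256` file and verifies the digest before use. The same check can be reproduced with `sha256sum` over a local stand-in file (no network; file name hypothetical):

```shell
# Verify a download against its .sha256 manifest, as minikube's downloader does.
f=$(mktemp)
printf 'fake-kubelet-binary' > "$f"
sha256sum "$f" > "$f.sha256"   # stands in for the published checksum file
sha256sum -c "$f.sha256"       # exits non-zero on any mismatch
```

A corrupted or truncated transfer changes the digest, so `-c` fails and the cached artifact is rejected rather than installed.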
	I0229 02:18:36.998042    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0229 02:18:37.009077    8584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0229 02:18:37.017133    8584 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0229 02:18:37.017341    8584 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0229 02:18:37.017341    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0229 02:18:40.084939    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0229 02:18:40.095940    8584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0229 02:18:40.104473    8584 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0229 02:18:40.104473    8584 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0229 02:18:40.104473    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0229 02:18:45.263699    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:18:45.287939    8584 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0229 02:18:45.299336    8584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0229 02:18:45.305390    8584 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0229 02:18:45.305390    8584 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0229 02:18:45.305390    8584 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0229 02:18:45.925172    8584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0229 02:18:45.944660    8584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0229 02:18:45.978335    8584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:18:46.017572    8584 ssh_runner.go:195] Run: grep 172.19.2.165	control-plane.minikube.internal$ /etc/hosts
	I0229 02:18:46.024303    8584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.2.165	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:18:46.045317    8584 host.go:66] Checking if "multinode-314500" exists ...
	I0229 02:18:46.045993    8584 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:18:46.045993    8584 start.go:304] JoinCluster: &{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.165 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.5.202 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:18:46.046193    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0229 02:18:46.046251    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:18:48.030615    8584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:18:48.030615    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:48.030726    8584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:18:50.433720    8584 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:18:50.434239    8584 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:18:50.434239    8584 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:18:50.638259    8584 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token o9oq2m.h2bk0u2kuwdvt40c --discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 
	I0229 02:18:50.638259    8584 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.5918101s)
	I0229 02:18:50.638259    8584 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.19.5.202 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 02:18:50.638259    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o9oq2m.h2bk0u2kuwdvt40c --discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-314500-m02"
	I0229 02:18:50.699991    8584 command_runner.go:130] ! W0229 02:18:50.872733    1324 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0229 02:18:50.889853    8584 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:18:53.684561    8584 command_runner.go:130] > [preflight] Running pre-flight checks
	I0229 02:18:53.684561    8584 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0229 02:18:53.684561    8584 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0229 02:18:53.684561    8584 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:18:53.684561    8584 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:18:53.684561    8584 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 02:18:53.684715    8584 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0229 02:18:53.684715    8584 command_runner.go:130] > This node has joined the cluster:
	I0229 02:18:53.684715    8584 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0229 02:18:53.684715    8584 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0229 02:18:53.684715    8584 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0229 02:18:53.684802    8584 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o9oq2m.h2bk0u2kuwdvt40c --discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-314500-m02": (3.0463738s)
	I0229 02:18:53.684802    8584 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0229 02:18:53.931915    8584 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0229 02:18:54.149000    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=multinode-314500 minikube.k8s.io/updated_at=2024_02_29T02_18_54_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:18:54.276936    8584 command_runner.go:130] > node/multinode-314500-m02 labeled
	I0229 02:18:54.276936    8584 start.go:306] JoinCluster complete in 8.2304841s
	I0229 02:18:54.277943    8584 cni.go:84] Creating CNI manager for ""
	I0229 02:18:54.277943    8584 cni.go:136] 2 nodes found, recommending kindnet
	I0229 02:18:54.287322    8584 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 02:18:54.295314    8584 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 02:18:54.295314    8584 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 02:18:54.295314    8584 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 02:18:54.295430    8584 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 02:18:54.295430    8584 command_runner.go:130] > Access: 2024-02-29 02:14:07.987005400 +0000
	I0229 02:18:54.295430    8584 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 02:18:54.295430    8584 command_runner.go:130] > Change: 2024-02-29 02:13:59.368000000 +0000
	I0229 02:18:54.295430    8584 command_runner.go:130] >  Birth: -
	I0229 02:18:54.295529    8584 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 02:18:54.295574    8584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 02:18:54.339530    8584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 02:18:54.828066    8584 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0229 02:18:54.828174    8584 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0229 02:18:54.828174    8584 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0229 02:18:54.828174    8584 command_runner.go:130] > daemonset.apps/kindnet configured
	I0229 02:18:54.829484    8584 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:18:54.830286    8584 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:18:54.831290    8584 round_trippers.go:463] GET https://172.19.2.165:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 02:18:54.831290    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:54.831374    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:54.831374    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:54.847724    8584 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0229 02:18:54.847724    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:54.847724    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:54.847724    8584 round_trippers.go:580]     Content-Length: 291
	I0229 02:18:54.847724    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:55 GMT
	I0229 02:18:54.847724    8584 round_trippers.go:580]     Audit-Id: e12071b6-30c0-4d6d-9023-573b3f854ed4
	I0229 02:18:54.847724    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:54.847724    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:54.847724    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:54.848623    8584 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"439","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 02:18:54.848743    8584 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-314500" context rescaled to 1 replicas
	I0229 02:18:54.848818    8584 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.19.5.202 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 02:18:54.849622    8584 out.go:177] * Verifying Kubernetes components...
	I0229 02:18:54.859551    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:18:54.884779    8584 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:18:54.885357    8584 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:18:54.886093    8584 node_ready.go:35] waiting up to 6m0s for node "multinode-314500-m02" to be "Ready" ...
	I0229 02:18:54.886178    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:54.886178    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:54.886263    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:54.886292    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:54.889540    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:54.889540    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:54.889540    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:54.889540    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:54.889540    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:54.889540    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:55 GMT
	I0229 02:18:54.889540    8584 round_trippers.go:580]     Audit-Id: 16a67bb6-f9fa-47dc-9acc-fded8dd1ddf0
	I0229 02:18:54.889540    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:54.890077    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:55.391661    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:55.391763    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:55.391763    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:55.391763    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:55.397889    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:18:55.397956    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:55.397956    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:55.397956    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:55.398023    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:55.398023    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:55.398023    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:55 GMT
	I0229 02:18:55.398023    8584 round_trippers.go:580]     Audit-Id: 76e07a31-ea9d-45a0-bac4-b0a49382c981
	I0229 02:18:55.398637    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:55.894750    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:55.894865    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:55.894865    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:55.894865    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:55.898265    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:55.898265    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:55.898265    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:55.898265    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:55.898265    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:55.898265    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:56 GMT
	I0229 02:18:55.898265    8584 round_trippers.go:580]     Audit-Id: db33e390-9484-47f5-9023-d4f5140c6a73
	I0229 02:18:55.898265    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:55.899762    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:56.397336    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:56.397336    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:56.397336    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:56.397336    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:56.400945    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:56.400945    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:56.400945    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:56.401544    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:56.401544    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:56 GMT
	I0229 02:18:56.401544    8584 round_trippers.go:580]     Audit-Id: 7b5663c7-4127-436f-a916-f944f1a9362c
	I0229 02:18:56.401544    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:56.401544    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:56.401804    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:56.899952    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:56.899952    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:56.899952    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:56.899952    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:56.913982    8584 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0229 02:18:56.913982    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:56.913982    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:56.913982    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:56.913982    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:56.913982    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:56.913982    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:57 GMT
	I0229 02:18:56.913982    8584 round_trippers.go:580]     Audit-Id: 8cfef47d-31b6-4936-8599-942d267d5c62
	I0229 02:18:56.916795    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:56.917437    8584 node_ready.go:58] node "multinode-314500-m02" has status "Ready":"False"
	I0229 02:18:57.388540    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:57.388540    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:57.388540    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:57.388540    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:57.392537    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:57.392537    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:57.392537    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:57.392537    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:57 GMT
	I0229 02:18:57.392537    8584 round_trippers.go:580]     Audit-Id: 9689fe16-0d2b-45b2-bb7b-66bf24615cf8
	I0229 02:18:57.392537    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:57.392537    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:57.392537    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:57.392737    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:57.905825    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:57.905825    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:57.905825    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:57.905825    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:57.909488    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:57.909488    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:57.909488    8584 round_trippers.go:580]     Audit-Id: 93cc2139-334a-44b0-a008-1bab083e526a
	I0229 02:18:57.910054    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:57.910054    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:57.910054    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:57.910054    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:57.910054    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:58 GMT
	I0229 02:18:57.910054    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:58.400349    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:58.400349    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:58.400349    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:58.400349    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:58.404938    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:18:58.404938    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:58.404938    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:58.404938    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:58.404938    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:58 GMT
	I0229 02:18:58.404938    8584 round_trippers.go:580]     Audit-Id: 6e7258bb-b00b-4e60-87e5-7b6336f44acf
	I0229 02:18:58.405337    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:58.405337    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:58.406994    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:58.888065    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:58.888104    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:58.888154    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:58.888154    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:58.892109    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:58.892515    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:58.892515    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:58.892515    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:58.892515    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:58.892515    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:58.892515    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:59 GMT
	I0229 02:18:58.892515    8584 round_trippers.go:580]     Audit-Id: a77bafa9-ce1a-4082-a191-10262cf4fc99
	I0229 02:18:58.892786    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:59.391822    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:59.391822    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:59.391822    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:59.391822    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:59.397773    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:18:59.397840    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:59.397878    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:59.397878    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:59.397878    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:59.397878    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:59.397878    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:18:59 GMT
	I0229 02:18:59.397878    8584 round_trippers.go:580]     Audit-Id: f98af41c-d5cf-447b-97f9-e89ff1495066
	I0229 02:18:59.398819    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:18:59.399208    8584 node_ready.go:58] node "multinode-314500-m02" has status "Ready":"False"
	I0229 02:18:59.899172    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:18:59.899172    8584 round_trippers.go:469] Request Headers:
	I0229 02:18:59.899241    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:18:59.899241    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:18:59.902652    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:18:59.902652    8584 round_trippers.go:577] Response Headers:
	I0229 02:18:59.902652    8584 round_trippers.go:580]     Audit-Id: 5f01caf7-30bf-495c-889c-847503d5df90
	I0229 02:18:59.902652    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:18:59.902652    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:18:59.902652    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:18:59.902652    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:18:59.902652    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:00 GMT
	I0229 02:18:59.903665    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:00.389363    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:00.389363    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:00.389363    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:00.389447    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:00.393244    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:00.393502    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:00.393502    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:00 GMT
	I0229 02:19:00.393502    8584 round_trippers.go:580]     Audit-Id: c46b7762-54e7-4b1c-bff0-200199beca33
	I0229 02:19:00.393502    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:00.393502    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:00.393502    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:00.393502    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:00.393735    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:00.896187    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:00.896187    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:00.896270    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:00.896270    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:00.906719    8584 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0229 02:19:00.906719    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:00.906719    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:01 GMT
	I0229 02:19:00.906719    8584 round_trippers.go:580]     Audit-Id: 51e07ad7-2bc2-406a-a4af-4f3e1efa975e
	I0229 02:19:00.906719    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:00.906719    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:00.906719    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:00.906719    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:00.906719    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:01.387637    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:01.387637    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:01.387637    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:01.387637    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:01.428791    8584 round_trippers.go:574] Response Status: 200 OK in 41 milliseconds
	I0229 02:19:01.429599    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:01.429599    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:01.429599    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:01.429599    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:01.429599    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:01 GMT
	I0229 02:19:01.429599    8584 round_trippers.go:580]     Audit-Id: 119db968-13f7-4535-8658-337189a296ea
	I0229 02:19:01.429599    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:01.430142    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:01.430583    8584 node_ready.go:58] node "multinode-314500-m02" has status "Ready":"False"
	I0229 02:19:01.888493    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:01.888493    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:01.888493    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:01.888493    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:01.891732    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:01.891732    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:01.891732    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:01.891732    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:01.891732    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:02 GMT
	I0229 02:19:01.891732    8584 round_trippers.go:580]     Audit-Id: 42832318-f25b-490f-aff7-877895b7a3ba
	I0229 02:19:01.892570    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:01.892570    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:01.892677    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:02.396657    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:02.396657    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:02.396657    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:02.396657    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:02.399223    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:02.399223    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:02.399223    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:02.399223    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:02.399223    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:02.399223    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:02.399223    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:02 GMT
	I0229 02:19:02.399223    8584 round_trippers.go:580]     Audit-Id: 35d2dceb-2382-4616-b8e5-6a0d14e043ab
	I0229 02:19:02.400063    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:02.900535    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:02.900535    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:02.900535    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:02.900535    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:02.905068    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:19:02.905068    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:02.905068    8584 round_trippers.go:580]     Audit-Id: ace017cd-ee9f-4bd0-9b52-397013c1b792
	I0229 02:19:02.905068    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:02.905068    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:02.905068    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:02.905068    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:02.905068    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:03 GMT
	I0229 02:19:02.905391    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:03.394230    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:03.394230    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:03.394230    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:03.394230    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:03.396650    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:03.396650    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:03.396650    8584 round_trippers.go:580]     Audit-Id: 5481e7b5-4a4c-446d-a04a-bc2f56d87626
	I0229 02:19:03.396650    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:03.396650    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:03.396650    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:03.396650    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:03.396650    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:03 GMT
	I0229 02:19:03.397840    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"593","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 02:19:03.886639    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:03.886639    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:03.886639    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:03.886639    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:03.890655    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:19:03.890716    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:03.890716    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:03.890716    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:03.890784    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:03.890784    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:03.890784    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:04 GMT
	I0229 02:19:03.890784    8584 round_trippers.go:580]     Audit-Id: e654a293-e86e-4326-8709-9c556c1b6a16
	I0229 02:19:03.890957    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:03.891302    8584 node_ready.go:58] node "multinode-314500-m02" has status "Ready":"False"
	I0229 02:19:04.395161    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:04.395161    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:04.395161    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:04.395161    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:04.398988    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:04.398988    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:04.398988    8584 round_trippers.go:580]     Audit-Id: 5bd0cf7c-4754-40ca-abc1-50d4188e1af1
	I0229 02:19:04.398988    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:04.398988    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:04.398988    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:04.399337    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:04.399337    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:04 GMT
	I0229 02:19:04.399498    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:04.900506    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:04.900506    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:04.900588    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:04.900588    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:04.904345    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:04.904345    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:04.904345    8584 round_trippers.go:580]     Audit-Id: ea6f6d91-b34a-498d-9365-83f52c171ba8
	I0229 02:19:04.904345    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:04.904345    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:04.904345    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:04.904345    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:04.904345    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:05 GMT
	I0229 02:19:04.905267    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:05.390945    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:05.391025    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:05.391025    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:05.391025    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:05.394999    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:05.395256    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:05.395256    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:05.395256    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:05 GMT
	I0229 02:19:05.395256    8584 round_trippers.go:580]     Audit-Id: 48db6fca-fdd8-4b8e-8acf-d8508f01bc99
	I0229 02:19:05.395256    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:05.395256    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:05.395256    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:05.395433    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:05.897185    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:05.897253    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:05.897253    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:05.897253    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:05.901327    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:19:05.901327    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:05.901327    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:05.901327    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:05.901327    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:06 GMT
	I0229 02:19:05.901327    8584 round_trippers.go:580]     Audit-Id: cc504900-e223-4f88-81bf-24d20ae238cd
	I0229 02:19:05.901327    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:05.901327    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:05.901610    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:05.901610    8584 node_ready.go:58] node "multinode-314500-m02" has status "Ready":"False"
	I0229 02:19:06.399376    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:06.399376    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:06.399445    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:06.399445    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:06.402595    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:06.402595    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:06.402595    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:06.402595    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:06.402595    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:06.402595    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:06 GMT
	I0229 02:19:06.402595    8584 round_trippers.go:580]     Audit-Id: 9f0e2a8e-137c-4cc5-9263-1f23093b3170
	I0229 02:19:06.402595    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:06.403455    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:06.899253    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:06.899323    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:06.899323    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:06.899323    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:06.903424    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:06.903424    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:06.903485    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:06.903485    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:06.903485    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:06.903485    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:07 GMT
	I0229 02:19:06.903485    8584 round_trippers.go:580]     Audit-Id: 70639d9f-98b5-4954-9cc2-ddac86c9913d
	I0229 02:19:06.903485    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:06.903620    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:07.401908    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:07.401994    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:07.402081    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:07.402081    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:07.405358    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:07.405358    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:07.405358    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:07.405358    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:07.405358    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:07 GMT
	I0229 02:19:07.406237    8584 round_trippers.go:580]     Audit-Id: 76fc92ca-3360-4c4e-bd5f-1f7bf5cc52d9
	I0229 02:19:07.406237    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:07.406237    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:07.406494    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:07.888332    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:07.888410    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:07.888410    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:07.888410    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:07.894132    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:19:07.894651    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:07.894736    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:08 GMT
	I0229 02:19:07.894736    8584 round_trippers.go:580]     Audit-Id: 1c3c1ce0-0769-425d-afd3-d1bd32756322
	I0229 02:19:07.894736    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:07.894736    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:07.894736    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:07.894736    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:07.894736    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:08.389430    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:08.389523    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:08.389523    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:08.389523    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:08.392857    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:08.392857    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:08.392857    8584 round_trippers.go:580]     Audit-Id: b2f601d5-c1ee-47a1-b56e-755a0c4ad649
	I0229 02:19:08.393710    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:08.393710    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:08.393710    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:08.393710    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:08.393710    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:08 GMT
	I0229 02:19:08.393710    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:08.393710    8584 node_ready.go:58] node "multinode-314500-m02" has status "Ready":"False"
	I0229 02:19:08.887326    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:08.887326    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:08.887326    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:08.887326    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:08.891027    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:08.891027    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:08.891027    8584 round_trippers.go:580]     Audit-Id: 0f021001-a406-44d6-94d8-93ef736fbe42
	I0229 02:19:08.891670    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:08.891670    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:08.891670    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:08.891670    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:08.891670    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:09 GMT
	I0229 02:19:08.892089    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:09.389425    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:09.389425    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.389425    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.389425    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.396421    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:19:09.396421    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.396421    8584 round_trippers.go:580]     Audit-Id: 5fbac51a-70b7-4815-bf98-6c7af5b38950
	I0229 02:19:09.396421    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.396421    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.396421    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.396421    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.396421    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:09 GMT
	I0229 02:19:09.396421    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"610","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 02:19:09.894460    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:09.894728    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.894728    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.894728    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.898034    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:09.898034    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.898864    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.898864    8584 round_trippers.go:580]     Audit-Id: 30013302-3f77-4414-bdaf-b073ae7cc7ad
	I0229 02:19:09.898864    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.898864    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.898864    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.898864    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.899055    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"622","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3255 chars]
	I0229 02:19:09.899681    8584 node_ready.go:49] node "multinode-314500-m02" has status "Ready":"True"
	I0229 02:19:09.899760    8584 node_ready.go:38] duration metric: took 15.0128311s waiting for node "multinode-314500-m02" to be "Ready" ...
	I0229 02:19:09.899760    8584 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:19:09.899988    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods
	I0229 02:19:09.899988    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.900078    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.900078    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.906930    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:19:09.906930    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.906930    8584 round_trippers.go:580]     Audit-Id: 89da8e7e-82dd-4ddb-8b70-e96b345eeabf
	I0229 02:19:09.906930    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.906930    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.907383    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.907383    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.907383    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.908247    8584 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"622"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67426 chars]
	I0229 02:19:09.910949    8584 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.911270    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:19:09.911270    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.911270    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.911270    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.913489    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.913489    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.914473    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.914473    8584 round_trippers.go:580]     Audit-Id: 7bc2e034-8bca-4f19-a593-29d856effd79
	I0229 02:19:09.914473    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.914473    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.914473    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.914473    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.914473    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6282 chars]
	I0229 02:19:09.914473    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:09.915219    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.915219    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.915219    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.917425    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.917425    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.918440    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.918440    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.918440    8584 round_trippers.go:580]     Audit-Id: 4809d0f9-91ec-4b02-b3ae-312c0e7cd898
	I0229 02:19:09.918440    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.918440    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.918440    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.918977    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 02:19:09.919175    8584 pod_ready.go:92] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:09.919175    8584 pod_ready.go:81] duration metric: took 7.9754ms waiting for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.919175    8584 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.919175    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-314500
	I0229 02:19:09.919700    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.919700    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.919700    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.921900    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.922797    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.922797    8584 round_trippers.go:580]     Audit-Id: 99d5dfdd-529d-414a-bbab-ec3564725035
	I0229 02:19:09.922797    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.922797    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.922797    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.922797    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.922869    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.922990    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-314500","namespace":"kube-system","uid":"6fc42e7c-48f9-46df-bf2f-861e0684e37f","resourceVersion":"323","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.2.165:2379","kubernetes.io/config.hash":"0b84e88097a2b59a9c108b0f9fa2b889","kubernetes.io/config.mirror":"0b84e88097a2b59a9c108b0f9fa2b889","kubernetes.io/config.seen":"2024-02-29T02:15:52.221392786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5852 chars]
	I0229 02:19:09.923537    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:09.923537    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.923537    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.923537    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.926234    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.926984    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.926984    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.926984    8584 round_trippers.go:580]     Audit-Id: df2638c9-ac54-4653-bb22-db74ffa3024c
	I0229 02:19:09.926984    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.926984    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.926984    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.926984    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.927160    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 02:19:09.927439    8584 pod_ready.go:92] pod "etcd-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:09.927439    8584 pod_ready.go:81] duration metric: took 8.2637ms waiting for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.927439    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.927439    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-314500
	I0229 02:19:09.927439    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.927439    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.927439    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.930125    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.930125    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.930125    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.930125    8584 round_trippers.go:580]     Audit-Id: 7d1cb678-5653-4d94-81c2-91c8fa733734
	I0229 02:19:09.930125    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.930125    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.930125    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.930125    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.931265    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-314500","namespace":"kube-system","uid":"fc266082-ff2c-4bd1-951f-11dc765a28ae","resourceVersion":"303","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.2.165:8443","kubernetes.io/config.hash":"75abc10fab898952206cc1d682d3c922","kubernetes.io/config.mirror":"75abc10fab898952206cc1d682d3c922","kubernetes.io/config.seen":"2024-02-29T02:15:52.221397486Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7390 chars]
	I0229 02:19:09.931368    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:09.931368    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.931368    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.931368    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.933978    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.933978    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.933978    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.934688    8584 round_trippers.go:580]     Audit-Id: d82b569f-a41c-4dec-b10e-f07a48060338
	I0229 02:19:09.934688    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.934688    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.934688    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.934688    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.934688    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 02:19:09.935545    8584 pod_ready.go:92] pod "kube-apiserver-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:09.935545    8584 pod_ready.go:81] duration metric: took 8.1061ms waiting for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.935605    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.935677    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-314500
	I0229 02:19:09.935677    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.935677    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.935677    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.938290    8584 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:19:09.938290    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.938290    8584 round_trippers.go:580]     Audit-Id: f1c6fb4d-9811-4d1e-b351-72c1daa1ec71
	I0229 02:19:09.938290    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.938290    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.938290    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.938290    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.938290    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.938290    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-314500","namespace":"kube-system","uid":"58e57902-e113-44a9-b5b5-4aba2ba13491","resourceVersion":"302","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.mirror":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.seen":"2024-02-29T02:15:52.221398986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6965 chars]
	I0229 02:19:09.939348    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:09.939348    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:09.939348    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:09.939348    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:09.943696    8584 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:19:09.943696    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:09.943696    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:09.943696    8584 round_trippers.go:580]     Audit-Id: 8b6a9ffa-4316-4827-a442-9ff4f30d586a
	I0229 02:19:09.943696    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:09.943918    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:09.943918    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:09.943918    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:09.944022    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 02:19:09.944022    8584 pod_ready.go:92] pod "kube-controller-manager-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:09.944022    8584 pod_ready.go:81] duration metric: took 8.417ms waiting for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:09.944022    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:10.097935    8584 request.go:629] Waited for 152.8628ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gbrl
	I0229 02:19:10.098174    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gbrl
	I0229 02:19:10.098219    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:10.098219    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:10.098219    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:10.104877    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:19:10.104877    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:10.104877    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:10.104877    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:10.104877    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:10.104877    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:10.104877    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:10.104877    8584 round_trippers.go:580]     Audit-Id: 9eb11c5e-881c-42bc-9be1-5f24ca6abc36
	I0229 02:19:10.105667    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4gbrl","generateName":"kube-proxy-","namespace":"kube-system","uid":"accb56cb-79ee-4f16-b05e-91bf554c4a60","resourceVersion":"606","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0229 02:19:10.301343    8584 request.go:629] Waited for 194.8528ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:10.301407    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:19:10.301407    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:10.301407    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:10.301407    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:10.304982    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:10.304982    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:10.304982    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:10.304982    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:10.305600    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:10.305600    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:10.305600    8584 round_trippers.go:580]     Audit-Id: c18086a1-3697-45c4-8944-d8d7689207d6
	I0229 02:19:10.305600    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:10.305690    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"622","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_18_54_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3255 chars]
	I0229 02:19:10.306238    8584 pod_ready.go:92] pod "kube-proxy-4gbrl" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:10.306384    8584 pod_ready.go:81] duration metric: took 362.2941ms waiting for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:10.306444    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:10.504518    8584 request.go:629] Waited for 197.6938ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:19:10.504606    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:19:10.504682    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:10.504682    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:10.504682    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:10.511019    8584 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:19:10.511019    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:10.511019    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:10.511019    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:10.511019    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:10.511019    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:10.511019    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:10.511019    8584 round_trippers.go:580]     Audit-Id: 4ae6576d-36bf-4327-85f5-11b14661f5ab
	I0229 02:19:10.511729    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6r6j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b84b22d-3786-4f9e-a23a-c7cfc93bb671","resourceVersion":"394","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0229 02:19:10.706346    8584 request.go:629] Waited for 193.8669ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:10.706642    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:10.706642    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:10.706642    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:10.706642    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:10.712840    8584 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:19:10.712895    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:10.712978    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:10.713002    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:10 GMT
	I0229 02:19:10.713002    8584 round_trippers.go:580]     Audit-Id: c5866151-f886-44d1-8800-b5f13dbf5b70
	I0229 02:19:10.713002    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:10.713002    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:10.713002    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:10.713002    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 02:19:10.713751    8584 pod_ready.go:92] pod "kube-proxy-6r6j4" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:10.713751    8584 pod_ready.go:81] duration metric: took 407.2841ms waiting for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:10.713751    8584 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:10.908577    8584 request.go:629] Waited for 194.7255ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:19:10.908997    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:19:10.908997    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:10.908997    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:10.908997    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:10.912468    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:10.912468    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:10.912468    8584 round_trippers.go:580]     Audit-Id: a20a5e1a-e0b4-47eb-ab35-b1c357c97ae2
	I0229 02:19:10.912468    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:10.912468    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:10.912468    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:10.912468    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:10.912468    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:11 GMT
	I0229 02:19:10.913104    8584 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-314500","namespace":"kube-system","uid":"31fcecc6-17de-43a6-892d-37cd915de64b","resourceVersion":"288","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.mirror":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.seen":"2024-02-29T02:15:52.221399886Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4695 chars]
	I0229 02:19:11.095871    8584 request.go:629] Waited for 181.8524ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:11.096146    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes/multinode-314500
	I0229 02:19:11.096146    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:11.096146    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:11.096146    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:11.104050    8584 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:19:11.104316    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:11.104316    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:11 GMT
	I0229 02:19:11.104316    8584 round_trippers.go:580]     Audit-Id: 7e9bd965-e810-45bc-85a8-4bb609661efb
	I0229 02:19:11.104316    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:11.104316    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:11.104316    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:11.104368    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:11.104637    8584 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 02:19:11.105147    8584 pod_ready.go:92] pod "kube-scheduler-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:19:11.105147    8584 pod_ready.go:81] duration metric: took 391.3742ms waiting for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:19:11.105147    8584 pod_ready.go:38] duration metric: took 1.2053198s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:19:11.105147    8584 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:19:11.114287    8584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:19:11.138275    8584 system_svc.go:56] duration metric: took 33.1261ms WaitForService to wait for kubelet.
	I0229 02:19:11.138407    8584 kubeadm.go:581] duration metric: took 16.2886816s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:19:11.138478    8584 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:19:11.300588    8584 request.go:629] Waited for 161.8606ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.165:8443/api/v1/nodes
	I0229 02:19:11.300980    8584 round_trippers.go:463] GET https://172.19.2.165:8443/api/v1/nodes
	I0229 02:19:11.300980    8584 round_trippers.go:469] Request Headers:
	I0229 02:19:11.300980    8584 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:19:11.300980    8584 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:19:11.304358    8584 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:19:11.304358    8584 round_trippers.go:577] Response Headers:
	I0229 02:19:11.304358    8584 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:19:11 GMT
	I0229 02:19:11.304358    8584 round_trippers.go:580]     Audit-Id: 51c168c1-a4fe-434a-973b-2f988dadac6f
	I0229 02:19:11.304358    8584 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:19:11.304358    8584 round_trippers.go:580]     Content-Type: application/json
	I0229 02:19:11.304358    8584 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:19:11.304358    8584 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:19:11.305480    8584 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"624"},"items":[{"metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"445","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9257 chars]
	I0229 02:19:11.306090    8584 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:19:11.306162    8584 node_conditions.go:123] node cpu capacity is 2
	I0229 02:19:11.306162    8584 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:19:11.306162    8584 node_conditions.go:123] node cpu capacity is 2
	I0229 02:19:11.306162    8584 node_conditions.go:105] duration metric: took 167.6741ms to run NodePressure ...
	I0229 02:19:11.306162    8584 start.go:228] waiting for startup goroutines ...
	I0229 02:19:11.306266    8584 start.go:242] writing updated cluster config ...
	I0229 02:19:11.315752    8584 ssh_runner.go:195] Run: rm -f paused
	I0229 02:19:11.444114    8584 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:19:11.444987    8584 out.go:177] * Done! kubectl is now configured to use "multinode-314500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 29 02:16:16 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:16.836943598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:16:16 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:16.844762626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:16:16 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:16.844839230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:16:16 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:16.844857831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:16:16 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:16.845360758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:16:16 multinode-314500 cri-dockerd[1179]: time="2024-02-29T02:16:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/13f6ae46b7d00cb80295b3fe4d8eaa84529c5242f022e3b07bef994969a9441e/resolv.conf as [nameserver 172.19.0.1]"
	Feb 29 02:16:17 multinode-314500 cri-dockerd[1179]: time="2024-02-29T02:16:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8c944d91b62504f7fd894d21889df5d67be765e4f02c1950a7a2a05132205f99/resolv.conf as [nameserver 172.19.0.1]"
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.077064890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.077136794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.077154495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.077248800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.216491649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.216758964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.217093082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:16:17 multinode-314500 dockerd[1292]: time="2024-02-29T02:16:17.217451101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:19:35 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:35.111682320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:19:35 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:35.112609163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:19:35 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:35.112830174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:19:35 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:35.113067885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:19:35 multinode-314500 cri-dockerd[1179]: time="2024-02-29T02:19:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ffe504a01e326c3100f593c8c5221a31307571eedec738e86cb135ea892fdda2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 29 02:19:36 multinode-314500 cri-dockerd[1179]: time="2024-02-29T02:19:36Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Feb 29 02:19:36 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:36.486937597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:19:36 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:36.487123907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:19:36 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:36.487169510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:19:36 multinode-314500 dockerd[1292]: time="2024-02-29T02:19:36.487422023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	56fdd268ee231       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago       Running             busybox                   0                   ffe504a01e326       busybox-5b5d89c9d6-qcblm
	11c14ebdfaf67       ead0a4a53df89                                                                                         9 minutes ago       Running             coredns                   0                   8c944d91b6250       coredns-5dd5756b68-8g6tg
	cf65b06d29a0d       6e38f40d628db                                                                                         9 minutes ago       Running             storage-provisioner       0                   13f6ae46b7d00       storage-provisioner
	dd61788b0a0d8       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              9 minutes ago       Running             kindnet-cni               0                   edb41bd5e75d4       kindnet-t9r77
	c93e331307466       83f6cc407eed8                                                                                         9 minutes ago       Running             kube-proxy                0                   4b10f8bd940b8       kube-proxy-6r6j4
	e5bc2b41493bf       73deb9a3f7025                                                                                         9 minutes ago       Running             etcd                      0                   b93004a3ca704       etcd-multinode-314500
	ab0c4864aee58       e3db313c6dbc0                                                                                         9 minutes ago       Running             kube-scheduler            0                   bf7b9750ae9ea       kube-scheduler-multinode-314500
	26b1ab05f99a9       d058aa5ab969c                                                                                         9 minutes ago       Running             kube-controller-manager   0                   96810146c69cf       kube-controller-manager-multinode-314500
	9815e253e1a06       7fe0e6f37db33                                                                                         9 minutes ago       Running             kube-apiserver            0                   2d13a46d83899       kube-apiserver-multinode-314500
	
	
	==> coredns [11c14ebdfaf6] <==
	[INFO] 10.244.1.2:39886 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00019781s
	[INFO] 10.244.0.3:51772 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000254814s
	[INFO] 10.244.0.3:55803 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074704s
	[INFO] 10.244.0.3:52953 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063204s
	[INFO] 10.244.0.3:35356 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000217512s
	[INFO] 10.244.0.3:51868 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000073604s
	[INFO] 10.244.0.3:43420 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103505s
	[INFO] 10.244.0.3:51899 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000210611s
	[INFO] 10.244.0.3:56850 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00018761s
	[INFO] 10.244.1.2:34482 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097705s
	[INFO] 10.244.1.2:36018 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150108s
	[INFO] 10.244.1.2:50932 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064203s
	[INFO] 10.244.1.2:38051 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129007s
	[INFO] 10.244.0.3:41360 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000316917s
	[INFO] 10.244.0.3:60778 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160008s
	[INFO] 10.244.0.3:57010 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133407s
	[INFO] 10.244.0.3:43292 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127407s
	[INFO] 10.244.1.2:34858 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135708s
	[INFO] 10.244.1.2:60624 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000269714s
	[INFO] 10.244.1.2:46116 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100405s
	[INFO] 10.244.1.2:57306 - 5 "PTR IN 1.0.19.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000138608s
	[INFO] 10.244.0.3:57177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000084804s
	[INFO] 10.244.0.3:55463 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000274415s
	[INFO] 10.244.0.3:36032 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185809s
	[INFO] 10.244.0.3:42058 - 5 "PTR IN 1.0.19.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000083604s
	
	
	==> describe nodes <==
	Name:               multinode-314500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-314500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=multinode-314500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T02_15_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:15:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-314500
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:25:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:25:04 +0000   Thu, 29 Feb 2024 02:15:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:25:04 +0000   Thu, 29 Feb 2024 02:15:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:25:04 +0000   Thu, 29 Feb 2024 02:15:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:25:04 +0000   Thu, 29 Feb 2024 02:16:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.2.165
	  Hostname:    multinode-314500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 fcca135ba85d4e2a802ef18b508e0e63
	  System UUID:                d0919ea2-7b7b-e246-9348-925d639776b8
	  Boot ID:                    2a7c10fd-1651-4220-b9f5-aa3595c1b1ae
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-qcblm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 coredns-5dd5756b68-8g6tg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m23s
	  kube-system                 etcd-multinode-314500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m36s
	  kube-system                 kindnet-t9r77                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m23s
	  kube-system                 kube-apiserver-multinode-314500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m36s
	  kube-system                 kube-controller-manager-multinode-314500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m36s
	  kube-system                 kube-proxy-6r6j4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 kube-scheduler-multinode-314500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m36s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)    100m (5%)
	  memory             220Mi (10%)   220Mi (10%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m45s (x8 over 9m45s)  kubelet          Node multinode-314500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m45s (x8 over 9m45s)  kubelet          Node multinode-314500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m45s (x7 over 9m45s)  kubelet          Node multinode-314500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m36s                  kubelet          Node multinode-314500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m36s                  kubelet          Node multinode-314500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m36s                  kubelet          Node multinode-314500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m24s                  node-controller  Node multinode-314500 event: Registered Node multinode-314500 in Controller
	  Normal  NodeReady                9m12s                  kubelet          Node multinode-314500 status is now: NodeReady
	
	
	Name:               multinode-314500-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-314500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=multinode-314500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_02_29T02_18_54_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:18:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-314500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:25:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:25:01 +0000   Thu, 29 Feb 2024 02:18:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:25:01 +0000   Thu, 29 Feb 2024 02:18:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:25:01 +0000   Thu, 29 Feb 2024 02:18:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:25:01 +0000   Thu, 29 Feb 2024 02:19:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.5.202
	  Hostname:    multinode-314500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 77aee02c4bee424dbfd3564939d0a240
	  System UUID:                b1627b4d-7d75-ed47-9ee8-e9d67e74df72
	  Boot ID:                    87f7a67a-8d8e-41a1-ae90-0f8737e86f14
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-826w2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 kindnet-6r7b8               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m35s
	  kube-system                 kube-proxy-4gbrl            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)    100m (5%)
	  memory             50Mi (2%)    50Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m26s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m35s (x5 over 6m37s)  kubelet          Node multinode-314500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x5 over 6m37s)  kubelet          Node multinode-314500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s (x5 over 6m37s)  kubelet          Node multinode-314500-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m34s                  node-controller  Node multinode-314500-m02 event: Registered Node multinode-314500-m02 in Controller
	  Normal  NodeReady                6m19s                  kubelet          Node multinode-314500-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.779304] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	[Feb29 02:14] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +40.611904] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.181228] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[Feb29 02:15] systemd-fstab-generator[907]: Ignoring "noauto" option for root device
	[  +0.106381] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.524061] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.195671] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[  +0.235266] systemd-fstab-generator[974]: Ignoring "noauto" option for root device
	[  +1.802878] systemd-fstab-generator[1132]: Ignoring "noauto" option for root device
	[  +0.200825] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.187739] systemd-fstab-generator[1156]: Ignoring "noauto" option for root device
	[  +0.272932] systemd-fstab-generator[1171]: Ignoring "noauto" option for root device
	[ +12.596345] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.100135] kauditd_printk_skb: 205 callbacks suppressed
	[  +9.124872] systemd-fstab-generator[1655]: Ignoring "noauto" option for root device
	[  +0.104351] kauditd_printk_skb: 51 callbacks suppressed
	[  +8.767706] systemd-fstab-generator[2631]: Ignoring "noauto" option for root device
	[  +0.137526] kauditd_printk_skb: 62 callbacks suppressed
	[Feb29 02:16] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.600907] kauditd_printk_skb: 29 callbacks suppressed
	[Feb29 02:19] hrtimer: interrupt took 2175903 ns
	[  +0.988605] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [e5bc2b41493b] <==
	{"level":"info","ts":"2024-02-29T02:15:45.444825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 switched to configuration voters=(2921898997477636162)"}
	{"level":"info","ts":"2024-02-29T02:15:45.449232Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b70ab9772a44d22c","local-member-id":"288caba846397842","added-peer-id":"288caba846397842","added-peer-peer-urls":["https://172.19.2.165:2380"]}
	{"level":"info","ts":"2024-02-29T02:15:45.445002Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.2.165:2380"}
	{"level":"info","ts":"2024-02-29T02:15:45.451781Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"288caba846397842","initial-advertise-peer-urls":["https://172.19.2.165:2380"],"listen-peer-urls":["https://172.19.2.165:2380"],"advertise-client-urls":["https://172.19.2.165:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.2.165:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T02:15:45.451813Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T02:15:45.456207Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.2.165:2380"}
	{"level":"info","ts":"2024-02-29T02:15:46.279614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-29T02:15:46.279927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-29T02:15:46.280297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 received MsgPreVoteResp from 288caba846397842 at term 1"}
	{"level":"info","ts":"2024-02-29T02:15:46.280432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 became candidate at term 2"}
	{"level":"info","ts":"2024-02-29T02:15:46.280578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 received MsgVoteResp from 288caba846397842 at term 2"}
	{"level":"info","ts":"2024-02-29T02:15:46.280732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 became leader at term 2"}
	{"level":"info","ts":"2024-02-29T02:15:46.280856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 288caba846397842 elected leader 288caba846397842 at term 2"}
	{"level":"info","ts":"2024-02-29T02:15:46.285663Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:15:46.289486Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"288caba846397842","local-member-attributes":"{Name:multinode-314500 ClientURLs:[https://172.19.2.165:2379]}","request-path":"/0/members/288caba846397842/attributes","cluster-id":"b70ab9772a44d22c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T02:15:46.289834Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:15:46.292192Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T02:15:46.295691Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b70ab9772a44d22c","local-member-id":"288caba846397842","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:15:46.29636Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:15:46.296607Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:15:46.295902Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:15:46.298395Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.2.165:2379"}
	{"level":"info","ts":"2024-02-29T02:15:46.344121Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T02:15:46.352275Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T02:19:03.699393Z","caller":"traceutil/trace.go:171","msg":"trace[2003273810] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"117.265217ms","start":"2024-02-29T02:19:03.582107Z","end":"2024-02-29T02:19:03.699373Z","steps":["trace[2003273810] 'process raft request'  (duration: 117.135811ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:25:28 up 11 min,  0 users,  load average: 0.29, 0.31, 0.19
	Linux multinode-314500 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [dd61788b0a0d] <==
	I0229 02:24:22.859340       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:24:32.866965       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:24:32.867005       1 main.go:227] handling current node
	I0229 02:24:32.867016       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:24:32.867022       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:24:42.879465       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:24:42.879772       1 main.go:227] handling current node
	I0229 02:24:42.879898       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:24:42.879987       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:24:52.894108       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:24:52.894237       1 main.go:227] handling current node
	I0229 02:24:52.894253       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:24:52.894261       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:25:02.901456       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:25:02.901653       1 main.go:227] handling current node
	I0229 02:25:02.901669       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:25:02.901677       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:25:12.908304       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:25:12.908451       1 main.go:227] handling current node
	I0229 02:25:12.908467       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:25:12.908475       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:25:22.923999       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:25:22.924126       1 main.go:227] handling current node
	I0229 02:25:22.924141       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:25:22.924150       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [9815e253e1a0] <==
	I0229 02:15:48.203853       1 cache.go:39] Caches are synced for autoregister controller
	I0229 02:15:48.232330       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 02:15:48.232740       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 02:15:48.234868       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 02:15:48.236962       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 02:15:48.238608       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0229 02:15:48.238634       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0229 02:15:48.240130       1 controller.go:624] quota admission added evaluator for: namespaces
	I0229 02:15:48.259371       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 02:15:48.288795       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 02:15:49.050665       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0229 02:15:49.064719       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0229 02:15:49.064738       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0229 02:15:49.909107       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0229 02:15:49.978633       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0229 02:15:50.069966       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0229 02:15:50.082357       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.19.2.165]
	I0229 02:15:50.083992       1 controller.go:624] quota admission added evaluator for: endpoints
	I0229 02:15:50.090388       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0229 02:15:50.155063       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0229 02:15:51.998918       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0229 02:15:52.011885       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0229 02:15:52.026788       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0229 02:16:05.076718       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0229 02:16:05.263867       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [26b1ab05f99a] <==
	I0229 02:16:05.737501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.104µs"
	I0229 02:16:16.382507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="902.949µs"
	I0229 02:16:16.409455       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.604µs"
	I0229 02:16:17.774033       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="155.809µs"
	I0229 02:16:17.862409       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="36.897ms"
	I0229 02:16:17.868791       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.404µs"
	I0229 02:16:19.467304       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0229 02:18:53.354208       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-314500-m02\" does not exist"
	I0229 02:18:53.368926       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-314500-m02" podCIDRs=["10.244.1.0/24"]
	I0229 02:18:53.372475       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4gbrl"
	I0229 02:18:53.376875       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6r7b8"
	I0229 02:18:54.492680       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-314500-m02"
	I0229 02:18:54.493161       1 event.go:307] "Event occurred" object="multinode-314500-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-314500-m02 event: Registered Node multinode-314500-m02 in Controller"
	I0229 02:19:09.849595       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-314500-m02"
	I0229 02:19:34.656812       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0229 02:19:34.678854       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-826w2"
	I0229 02:19:34.689390       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-qcblm"
	I0229 02:19:34.698278       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="40.961829ms"
	I0229 02:19:34.725163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="26.446345ms"
	I0229 02:19:34.739405       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.836452ms"
	I0229 02:19:34.740025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.602µs"
	I0229 02:19:36.713325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="8.816271ms"
	I0229 02:19:36.713610       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="108.606µs"
	I0229 02:19:37.478878       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.961832ms"
	I0229 02:19:37.479378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="145.408µs"
	
	
	==> kube-proxy [c93e33130746] <==
	I0229 02:16:07.488822       1 server_others.go:69] "Using iptables proxy"
	I0229 02:16:07.511408       1 node.go:141] Successfully retrieved node IP: 172.19.2.165
	I0229 02:16:07.646052       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 02:16:07.646080       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 02:16:07.652114       1 server_others.go:152] "Using iptables Proxier"
	I0229 02:16:07.652346       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 02:16:07.652698       1 server.go:846] "Version info" version="v1.28.4"
	I0229 02:16:07.652712       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:16:07.654751       1 config.go:188] "Starting service config controller"
	I0229 02:16:07.655126       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 02:16:07.655241       1 config.go:97] "Starting endpoint slice config controller"
	I0229 02:16:07.655327       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 02:16:07.656324       1 config.go:315] "Starting node config controller"
	I0229 02:16:07.676099       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 02:16:07.679653       1 shared_informer.go:318] Caches are synced for node config
	I0229 02:16:07.757691       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 02:16:07.757737       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [ab0c4864aee5] <==
	W0229 02:15:48.237220       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 02:15:48.237295       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 02:15:49.044071       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 02:15:49.044214       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 02:15:49.085996       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 02:15:49.086626       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 02:15:49.106158       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 02:15:49.106848       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 02:15:49.126181       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 02:15:49.126580       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0229 02:15:49.196878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 02:15:49.196987       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0229 02:15:49.236282       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 02:15:49.236658       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 02:15:49.372072       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 02:15:49.372116       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 02:15:49.403666       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 02:15:49.403942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 02:15:49.418593       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0229 02:15:49.418838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0229 02:15:49.492335       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0229 02:15:49.492758       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0229 02:15:49.585577       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 02:15:49.585986       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0229 02:15:52.113114       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 02:20:52 multinode-314500 kubelet[2651]: E0229 02:20:52.341469    2651 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:20:52 multinode-314500 kubelet[2651]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:20:52 multinode-314500 kubelet[2651]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:20:52 multinode-314500 kubelet[2651]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:20:52 multinode-314500 kubelet[2651]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:21:52 multinode-314500 kubelet[2651]: E0229 02:21:52.340999    2651 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:21:52 multinode-314500 kubelet[2651]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:21:52 multinode-314500 kubelet[2651]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:21:52 multinode-314500 kubelet[2651]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:21:52 multinode-314500 kubelet[2651]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:22:52 multinode-314500 kubelet[2651]: E0229 02:22:52.343746    2651 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:22:52 multinode-314500 kubelet[2651]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:22:52 multinode-314500 kubelet[2651]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:22:52 multinode-314500 kubelet[2651]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:22:52 multinode-314500 kubelet[2651]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:23:52 multinode-314500 kubelet[2651]: E0229 02:23:52.340668    2651 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:23:52 multinode-314500 kubelet[2651]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:23:52 multinode-314500 kubelet[2651]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:23:52 multinode-314500 kubelet[2651]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:23:52 multinode-314500 kubelet[2651]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:24:52 multinode-314500 kubelet[2651]: E0229 02:24:52.342171    2651 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:24:52 multinode-314500 kubelet[2651]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:24:52 multinode-314500 kubelet[2651]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:24:52 multinode-314500 kubelet[2651]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:24:52 multinode-314500 kubelet[2651]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
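The repeated kubelet entries at the end of the log above show the iptables canary failing once a minute because the guest kernel exposes no IPv6 `nat` table. A quick way to probe for that condition from a shell on the VM is sketched below; this is illustrative only (not part of the report), and `ip6table_nat` is the usual module name, assumed here:

```shell
# Illustrative diagnostic: probe whether an IPv6 nat table is available.
# Any failure mode (missing binary, missing table) is reported the same way.
check_ipv6_nat() {
  if ip6tables -t nat -L -n >/dev/null 2>&1; then
    echo "ipv6-nat: ok"
  else
    echo "ipv6-nat: missing (a root shell could try: modprobe ip6table_nat)"
  fi
}
check_ipv6_nat
```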
** stderr ** 
	W0229 02:25:21.258161   11272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
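An aside on the stderr warning above: the Docker CLI keys context metadata directories by the SHA-256 of the context name, which is why the path for context "default" ends in that long hex digest. A one-liner reproduces it (sketch; standard coreutils assumed):

```shell
# A Docker CLI context's meta directory name is sha256(<context name>).
printf '%s' default | sha256sum | cut -d' ' -f1
# → 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```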
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-314500 -n multinode-314500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-314500 -n multinode-314500: (11.3681033s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-314500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/CopyFile (65.18s)


                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (508.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-314500
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-314500
multinode_test.go:318: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-314500: (1m23.2565898s)
multinode_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-314500 --wait=true -v=8 --alsologtostderr
E0229 02:34:12.104073    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
E0229 02:34:28.820522    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
multinode_test.go:323: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-314500 --wait=true -v=8 --alsologtostderr: (6m31.7128853s)
multinode_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-314500
multinode_test.go:335: reported node list is not the same after restart. Before restart: multinode-314500	172.19.2.165
multinode-314500-m02	172.19.5.202
multinode-314500-m03	172.19.5.92

                                                
                                                
After restart: multinode-314500	172.19.2.252
multinode-314500-m02	172.19.4.42
multinode-314500-m03	172.19.1.210
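Note that the node set itself survived the restart; only the addresses changed, likely because the Hyper-V switch hands out fresh DHCP leases after the VMs come back up, while the test compares the full name-plus-IP lines. A minimal sketch of that distinction, using the before/after lists quoted above (illustrative shell, not the test's actual Go code):

```shell
# Compare node *names* across the restart; the tab-separated IP column is what differs.
before=$(printf 'multinode-314500\t172.19.2.165\nmultinode-314500-m02\t172.19.5.202\nmultinode-314500-m03\t172.19.5.92')
after=$(printf 'multinode-314500\t172.19.2.252\nmultinode-314500-m02\t172.19.4.42\nmultinode-314500-m03\t172.19.1.210')
if [ "$(printf '%s\n' "$before" | cut -f1)" = "$(printf '%s\n' "$after" | cut -f1)" ]; then
  echo "same node set; only IPs changed"
fi
# → same node set; only IPs changed
```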
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-314500 -n multinode-314500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-314500 -n multinode-314500: (11.3806686s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-314500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-314500 logs -n 25: (8.4189665s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p mount-start-2-141600                           | mount-start-2-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:12 UTC |
	| delete  | -p mount-start-1-141600                           | mount-start-1-141600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:12 UTC |
	| start   | -p multinode-314500                               | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:13 UTC | 29 Feb 24 02:19 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- apply -f                   | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- rollout                    | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- get pods -o                | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- get pods -o                | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2 -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- get pods -o                | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC |                     |
	|         | busybox-5b5d89c9d6-826w2 -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.0.1                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC |                     |
	|         | busybox-5b5d89c9d6-qcblm -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.0.1                           |                      |                   |         |                     |                     |
	| node    | add -p multinode-314500 -v 3                      | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:20 UTC |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	| node    | multinode-314500 node stop m03                    | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:26 UTC |
	| node    | multinode-314500 node start                       | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:29 UTC |
	|         | m03 --alsologtostderr                             |                      |                   |         |                     |                     |
	| node    | list -p multinode-314500                          | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:29 UTC |                     |
	| stop    | -p multinode-314500                               | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:29 UTC | 29 Feb 24 02:31 UTC |
	| start   | -p multinode-314500                               | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:31 UTC | 29 Feb 24 02:37 UTC |
	|         | --wait=true -v=8                                  |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	| node    | list -p multinode-314500                          | multinode-314500     | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:37 UTC |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:31:06
	Running on machine: minikube5
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:31:06.661202    8616 out.go:291] Setting OutFile to fd 1432 ...
	I0229 02:31:06.661947    8616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:31:06.661947    8616 out.go:304] Setting ErrFile to fd 1372...
	I0229 02:31:06.661947    8616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:31:06.683123    8616 out.go:298] Setting JSON to false
	I0229 02:31:06.685674    8616 start.go:129] hostinfo: {"hostname":"minikube5","uptime":270093,"bootTime":1708903773,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 02:31:06.685674    8616 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 02:31:06.717356    8616 out.go:177] * [multinode-314500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 02:31:06.718390    8616 notify.go:220] Checking for updates...
	I0229 02:31:06.718552    8616 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:31:06.719539    8616 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:31:06.758787    8616 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 02:31:06.760492    8616 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:31:06.806128    8616 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:31:06.811540    8616 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:31:06.811881    8616 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:31:11.885923    8616 out.go:177] * Using the hyperv driver based on existing profile
	I0229 02:31:11.886571    8616 start.go:299] selected driver: hyperv
	I0229 02:31:11.886571    8616 start.go:903] validating driver "hyperv" against &{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.165 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.5.202 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.5.92 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:
false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:31:11.886780    8616 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:31:11.931779    8616 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:31:11.931779    8616 cni.go:84] Creating CNI manager for ""
	I0229 02:31:11.931779    8616 cni.go:136] 3 nodes found, recommending kindnet
	I0229 02:31:11.931779    8616 start_flags.go:323] config:
	{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.165 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.5.202 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.5.92 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-p
rovisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:31:11.931779    8616 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:31:11.932907    8616 out.go:177] * Starting control plane node multinode-314500 in cluster multinode-314500
	I0229 02:31:11.934280    8616 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:31:11.934463    8616 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 02:31:11.934505    8616 cache.go:56] Caching tarball of preloaded images
	I0229 02:31:11.934862    8616 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:31:11.935011    8616 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 02:31:11.935335    8616 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:31:11.937570    8616 start.go:365] acquiring machines lock for multinode-314500: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:31:11.937570    8616 start.go:369] acquired machines lock for "multinode-314500" in 0s
	I0229 02:31:11.937570    8616 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:31:11.937570    8616 fix.go:54] fixHost starting: 
	I0229 02:31:11.938478    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:31:14.522274    8616 main.go:141] libmachine: [stdout =====>] : Off
	
	I0229 02:31:14.522588    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:31:14.522588    8616 fix.go:102] recreateIfNeeded on multinode-314500: state=Stopped err=<nil>
	W0229 02:31:14.522663    8616 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:31:14.523565    8616 out.go:177] * Restarting existing hyperv VM for "multinode-314500" ...
	I0229 02:31:14.523930    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-314500
	I0229 02:31:17.296076    8616 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:31:17.296076    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:31:17.296076    8616 main.go:141] libmachine: Waiting for host to start...
	I0229 02:31:17.296076    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:31:19.426984    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:31:19.427150    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:31:19.427229    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:31:21.741897    8616 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:31:21.741897    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:31:22.756180    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:31:24.760419    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:31:24.761233    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:31:24.761316    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:31:27.060661    8616 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:31:27.060684    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:31:28.067555    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:31:30.084772    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:31:30.084819    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:31:30.084853    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:31:32.402991    8616 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:31:32.402991    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:31:33.410854    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:31:35.441327    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:31:35.441392    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:31:35.441454    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:31:37.784583    8616 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:31:37.785328    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:31:38.799624    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:31:40.832298    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:31:40.832298    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:31:40.832376    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:31:43.274174    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:31:43.274174    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:31:43.278169    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:31:45.290443    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:31:45.290443    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:31:45.290443    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:31:47.693888    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:31:47.693964    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:31:47.694124    8616 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:31:47.696055    8616 machine.go:88] provisioning docker machine ...
	I0229 02:31:47.696055    8616 buildroot.go:166] provisioning hostname "multinode-314500"
	I0229 02:31:47.696055    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:31:49.699042    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:31:49.699118    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:31:49.699118    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:31:52.087456    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:31:52.087908    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:31:52.091993    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:52.092596    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.252 22 <nil> <nil>}
	I0229 02:31:52.092675    8616 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-314500 && echo "multinode-314500" | sudo tee /etc/hostname
	I0229 02:31:52.258263    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-314500
	
	I0229 02:31:52.258263    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:31:54.253213    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:31:54.253213    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:31:54.254112    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:31:56.648383    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:31:56.648383    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:31:56.652440    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:56.653108    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.252 22 <nil> <nil>}
	I0229 02:31:56.653108    8616 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-314500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-314500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-314500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:31:56.810112    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:31:56.810178    8616 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 02:31:56.810225    8616 buildroot.go:174] setting up certificates
	I0229 02:31:56.810282    8616 provision.go:83] configureAuth start
	I0229 02:31:56.810324    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:31:58.801992    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:31:58.802069    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:31:58.802141    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:32:01.172673    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:32:01.172673    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:01.172673    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:32:03.165212    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:32:03.165447    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:03.165447    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:32:05.571082    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:32:05.571082    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:05.571163    8616 provision.go:138] copyHostCerts
	I0229 02:32:05.571297    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 02:32:05.571297    8616 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 02:32:05.571297    8616 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 02:32:05.571991    8616 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 02:32:05.572938    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 02:32:05.573150    8616 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 02:32:05.573150    8616 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 02:32:05.573150    8616 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 02:32:05.574311    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 02:32:05.574311    8616 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 02:32:05.574311    8616 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 02:32:05.574311    8616 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0229 02:32:05.575312    8616 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-314500 san=[172.19.2.252 172.19.2.252 localhost 127.0.0.1 minikube multinode-314500]
	I0229 02:32:05.794146    8616 provision.go:172] copyRemoteCerts
	I0229 02:32:05.803364    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:32:05.803535    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:32:07.807220    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:32:07.807220    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:07.807607    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:32:10.233360    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:32:10.233360    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:10.233556    8616 sshutil.go:53] new ssh client: &{IP:172.19.2.252 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:32:10.339218    8616 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5355242s)
	I0229 02:32:10.339218    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 02:32:10.339218    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 02:32:10.390041    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 02:32:10.391439    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 02:32:10.449005    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 02:32:10.450008    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0229 02:32:10.495374    8616 provision.go:86] duration metric: configureAuth took 13.6843268s
	I0229 02:32:10.495374    8616 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:32:10.496274    8616 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:32:10.496438    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:32:12.492769    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:32:12.492889    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:12.493013    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:32:14.860247    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:32:14.860620    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:14.865985    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:32:14.866509    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.252 22 <nil> <nil>}
	I0229 02:32:14.866509    8616 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 02:32:15.003877    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 02:32:15.004008    8616 buildroot.go:70] root file system type: tmpfs
	I0229 02:32:15.004230    8616 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 02:32:15.004230    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:32:17.019553    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:32:17.019553    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:17.020001    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:32:19.452930    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:32:19.452930    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:19.458333    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:32:19.458972    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.252 22 <nil> <nil>}
	I0229 02:32:19.458972    8616 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 02:32:19.634003    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 02:32:19.634622    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:32:21.645803    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:32:21.645803    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:21.646543    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:32:24.028370    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:32:24.029089    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:24.032846    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:32:24.033417    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.252 22 <nil> <nil>}
	I0229 02:32:24.033417    8616 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 02:32:25.352161    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 02:32:25.352161    8616 machine.go:91] provisioned docker machine in 37.6540009s
	I0229 02:32:25.352161    8616 start.go:300] post-start starting for "multinode-314500" (driver="hyperv")
	I0229 02:32:25.352338    8616 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:32:25.365951    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:32:25.365951    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:32:27.348036    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:32:27.348099    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:27.348099    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:32:29.798400    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:32:29.798400    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:29.798930    8616 sshutil.go:53] new ssh client: &{IP:172.19.2.252 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:32:29.913762    8616 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5474491s)
	I0229 02:32:29.922775    8616 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:32:29.929671    8616 command_runner.go:130] > NAME=Buildroot
	I0229 02:32:29.929671    8616 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 02:32:29.929671    8616 command_runner.go:130] > ID=buildroot
	I0229 02:32:29.929671    8616 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 02:32:29.929671    8616 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 02:32:29.929671    8616 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:32:29.929671    8616 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 02:32:29.929671    8616 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 02:32:29.930637    8616 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> 33122.pem in /etc/ssl/certs
	I0229 02:32:29.930637    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /etc/ssl/certs/33122.pem
	I0229 02:32:29.940180    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:32:29.962844    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /etc/ssl/certs/33122.pem (1708 bytes)
	I0229 02:32:30.012170    8616 start.go:303] post-start completed in 4.6597482s
	I0229 02:32:30.012170    8616 fix.go:56] fixHost completed within 1m18.0702357s
	I0229 02:32:30.012276    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:32:32.016489    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:32:32.017454    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:32.017548    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:32:34.443803    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:32:34.444582    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:34.450411    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:32:34.451026    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.252 22 <nil> <nil>}
	I0229 02:32:34.451026    8616 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 02:32:34.603258    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173954.752370158
	
	I0229 02:32:34.603258    8616 fix.go:206] guest clock: 1709173954.752370158
	I0229 02:32:34.603337    8616 fix.go:219] Guest: 2024-02-29 02:32:34.752370158 +0000 UTC Remote: 2024-02-29 02:32:30.0121703 +0000 UTC m=+83.500770701 (delta=4.740199858s)
	I0229 02:32:34.603469    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:32:36.605892    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:32:36.606457    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:36.606550    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:32:39.031630    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:32:39.031630    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:39.035913    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:32:39.036511    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.252 22 <nil> <nil>}
	I0229 02:32:39.036511    8616 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709173954
	I0229 02:32:39.192274    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 02:32:34 UTC 2024
	
	I0229 02:32:39.192274    8616 fix.go:226] clock set: Thu Feb 29 02:32:34 UTC 2024
	 (err=<nil>)
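The clock fix above is just guest epoch minus remote epoch, followed by `sudo date -s @<epoch>` over SSH. A minimal shell sketch using whole-second timestamps taken from the log lines above (the fractional part is dropped, so the delta comes out as 4s rather than the logged 4.740199858s):

```shell
# Sketch of the delta computation the log reports at fix.go:219.
# Both epochs are whole-second readings taken from the log above.
remote_epoch=1709173950   # host-side reading (~02:32:30 UTC)
guest_epoch=1709173954    # guest-side `date` output (02:32:34 UTC)
delta=$((guest_epoch - remote_epoch))
echo "guest clock ahead by ${delta}s"
# minikube then resyncs the guest with: sudo date -s @<remote_epoch>
```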
	I0229 02:32:39.192274    8616 start.go:83] releasing machines lock for "multinode-314500", held for 1m27.2498269s
	I0229 02:32:39.192905    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:32:41.212244    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:32:41.212244    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:41.212244    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:32:43.594565    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:32:43.594565    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:43.599452    8616 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:32:43.599866    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:32:43.605695    8616 ssh_runner.go:195] Run: cat /version.json
	I0229 02:32:43.605695    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:32:45.636689    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:32:45.643876    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:45.643876    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:32:45.647510    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:32:45.647510    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:45.647656    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:32:48.091549    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:32:48.091549    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:48.091549    8616 sshutil.go:53] new ssh client: &{IP:172.19.2.252 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:32:48.112708    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:32:48.112708    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:32:48.113143    8616 sshutil.go:53] new ssh client: &{IP:172.19.2.252 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:32:48.290755    8616 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 02:32:48.290755    8616 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6907665s)
	I0229 02:32:48.290755    8616 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0229 02:32:48.290755    8616 ssh_runner.go:235] Completed: cat /version.json: (4.684798s)
	I0229 02:32:48.301272    8616 ssh_runner.go:195] Run: systemctl --version
	I0229 02:32:48.310417    8616 command_runner.go:130] > systemd 252 (252)
	I0229 02:32:48.310417    8616 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0229 02:32:48.319137    8616 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 02:32:48.328645    8616 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0229 02:32:48.329406    8616 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:32:48.340156    8616 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:32:48.370012    8616 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0229 02:32:48.370430    8616 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:32:48.370473    8616 start.go:475] detecting cgroup driver to use...
	I0229 02:32:48.370694    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:32:48.408129    8616 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0229 02:32:48.422715    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 02:32:48.455047    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:32:48.475851    8616 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:32:48.484511    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:32:48.521129    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:32:48.562161    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:32:48.590093    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:32:48.618785    8616 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:32:48.647808    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 02:32:48.674666    8616 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:32:48.694021    8616 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 02:32:48.704572    8616 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:32:48.732614    8616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:32:48.931935    8616 ssh_runner.go:195] Run: sudo systemctl restart containerd
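The sed invocations above switch containerd to the cgroupfs driver by forcing `SystemdCgroup = false` before the restart. A sketch of that rewrite applied to a throwaway file (the path and TOML contents here are illustrative stand-ins, not the VM's real /etc/containerd/config.toml):

```shell
# Apply minikube's SystemdCgroup rewrite to a scratch copy of the config.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same sed expression as the log, minus sudo and with the scratch path:
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$tmp"
result=$(grep SystemdCgroup "$tmp")
echo "$result"
rm -f "$tmp"
```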
	I0229 02:32:48.962756    8616 start.go:475] detecting cgroup driver to use...
	I0229 02:32:48.973434    8616 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 02:32:48.994571    8616 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0229 02:32:48.994571    8616 command_runner.go:130] > [Unit]
	I0229 02:32:48.994571    8616 command_runner.go:130] > Description=Docker Application Container Engine
	I0229 02:32:48.994571    8616 command_runner.go:130] > Documentation=https://docs.docker.com
	I0229 02:32:48.994571    8616 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0229 02:32:48.994571    8616 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0229 02:32:48.994571    8616 command_runner.go:130] > StartLimitBurst=3
	I0229 02:32:48.994571    8616 command_runner.go:130] > StartLimitIntervalSec=60
	I0229 02:32:48.994571    8616 command_runner.go:130] > [Service]
	I0229 02:32:48.994571    8616 command_runner.go:130] > Type=notify
	I0229 02:32:48.994571    8616 command_runner.go:130] > Restart=on-failure
	I0229 02:32:48.994571    8616 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0229 02:32:48.994571    8616 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0229 02:32:48.994571    8616 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0229 02:32:48.994571    8616 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0229 02:32:48.994571    8616 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0229 02:32:48.994571    8616 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0229 02:32:48.994571    8616 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0229 02:32:48.995780    8616 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0229 02:32:48.995780    8616 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0229 02:32:48.995780    8616 command_runner.go:130] > ExecStart=
	I0229 02:32:48.995780    8616 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0229 02:32:48.995780    8616 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0229 02:32:48.995780    8616 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0229 02:32:48.995905    8616 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0229 02:32:48.995905    8616 command_runner.go:130] > LimitNOFILE=infinity
	I0229 02:32:48.995905    8616 command_runner.go:130] > LimitNPROC=infinity
	I0229 02:32:48.995905    8616 command_runner.go:130] > LimitCORE=infinity
	I0229 02:32:48.995905    8616 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0229 02:32:48.995905    8616 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0229 02:32:48.995905    8616 command_runner.go:130] > TasksMax=infinity
	I0229 02:32:48.995905    8616 command_runner.go:130] > TimeoutStartSec=0
	I0229 02:32:48.995905    8616 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0229 02:32:48.995905    8616 command_runner.go:130] > Delegate=yes
	I0229 02:32:48.995905    8616 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0229 02:32:48.995905    8616 command_runner.go:130] > KillMode=process
	I0229 02:32:48.995905    8616 command_runner.go:130] > [Install]
	I0229 02:32:48.996049    8616 command_runner.go:130] > WantedBy=multi-user.target
	I0229 02:32:49.005205    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:32:49.037001    8616 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:32:49.072285    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:32:49.106008    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:32:49.142434    8616 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:32:49.195871    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:32:49.221343    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:32:49.258181    8616 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0229 02:32:49.268860    8616 ssh_runner.go:195] Run: which cri-dockerd
	I0229 02:32:49.275092    8616 command_runner.go:130] > /usr/bin/cri-dockerd
	I0229 02:32:49.284463    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 02:32:49.302391    8616 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 02:32:49.342685    8616 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 02:32:49.539822    8616 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 02:32:49.734859    8616 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 02:32:49.734859    8616 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 02:32:49.775757    8616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:32:49.975338    8616 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 02:32:51.672947    8616 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6974357s)
	I0229 02:32:51.683109    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 02:32:51.719672    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:32:51.754169    8616 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 02:32:51.956140    8616 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 02:32:52.159743    8616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:32:52.354231    8616 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 02:32:52.396825    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:32:52.430389    8616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:32:52.627450    8616 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 02:32:52.738953    8616 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 02:32:52.747799    8616 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 02:32:52.756346    8616 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0229 02:32:52.756346    8616 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 02:32:52.756346    8616 command_runner.go:130] > Device: 0,22	Inode: 858         Links: 1
	I0229 02:32:52.756684    8616 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0229 02:32:52.756684    8616 command_runner.go:130] > Access: 2024-02-29 02:32:52.818094063 +0000
	I0229 02:32:52.756740    8616 command_runner.go:130] > Modify: 2024-02-29 02:32:52.818094063 +0000
	I0229 02:32:52.756740    8616 command_runner.go:130] > Change: 2024-02-29 02:32:52.822093467 +0000
	I0229 02:32:52.756740    8616 command_runner.go:130] >  Birth: -
	I0229 02:32:52.756740    8616 start.go:543] Will wait 60s for crictl version
	I0229 02:32:52.765914    8616 ssh_runner.go:195] Run: which crictl
	I0229 02:32:52.770935    8616 command_runner.go:130] > /usr/bin/crictl
	I0229 02:32:52.779927    8616 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:32:52.854506    8616 command_runner.go:130] > Version:  0.1.0
	I0229 02:32:52.855513    8616 command_runner.go:130] > RuntimeName:  docker
	I0229 02:32:52.855513    8616 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0229 02:32:52.855513    8616 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 02:32:52.855513    8616 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 02:32:52.863519    8616 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:32:52.899514    8616 command_runner.go:130] > 24.0.7
	I0229 02:32:52.910149    8616 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:32:52.942865    8616 command_runner.go:130] > 24.0.7
	I0229 02:32:52.944895    8616 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 02:32:52.945011    8616 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 02:32:52.948985    8616 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 02:32:52.948985    8616 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 02:32:52.948985    8616 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 02:32:52.948985    8616 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:a3:c1 Flags:up|broadcast|multicast|running}
	I0229 02:32:52.951892    8616 ip.go:210] interface addr: fe80::fc78:4865:5cac:d448/64
	I0229 02:32:52.951892    8616 ip.go:210] interface addr: 172.19.0.1/20
	I0229 02:32:52.959899    8616 ssh_runner.go:195] Run: grep 172.19.0.1	host.minikube.internal$ /etc/hosts
	I0229 02:32:52.971185    8616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
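The `/etc/hosts` update above is an idempotent replace-or-append: drop any existing `host.minikube.internal` line, then append one with the current gateway IP. A sketch of the same idiom against a scratch file (the stale `172.19.0.9` entry is invented for illustration):

```shell
# Replace-or-append host.minikube.internal in a scratch hosts file.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.19.0.9\thost.minikube.internal\n' > "$hosts"
tab=$(printf '\t')
# Strip any old entry, then append the current one (same idiom as the log):
{ grep -v "${tab}host.minikube.internal$" "$hosts"; \
  printf '172.19.0.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
entry=$(grep host.minikube.internal "$hosts")
echo "$entry"
rm -f "$hosts"
```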
	I0229 02:32:52.996910    8616 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:32:53.003900    8616 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 02:32:53.031211    8616 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0229 02:32:53.031301    8616 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0229 02:32:53.031301    8616 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0229 02:32:53.031301    8616 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 02:32:53.031301    8616 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0229 02:32:53.031377    8616 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0229 02:32:53.031405    8616 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0229 02:32:53.031405    8616 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0229 02:32:53.031405    8616 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:32:53.031405    8616 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0229 02:32:53.031542    8616 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0229 02:32:53.031619    8616 docker.go:615] Images already preloaded, skipping extraction
	I0229 02:32:53.038740    8616 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 02:32:53.065222    8616 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0229 02:32:53.065222    8616 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0229 02:32:53.065222    8616 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0229 02:32:53.065222    8616 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 02:32:53.065222    8616 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0229 02:32:53.065222    8616 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0229 02:32:53.065222    8616 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0229 02:32:53.065222    8616 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0229 02:32:53.065222    8616 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:32:53.065222    8616 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0229 02:32:53.065222    8616 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0229 02:32:53.065222    8616 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:32:53.074223    8616 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 02:32:53.113476    8616 command_runner.go:130] > cgroupfs
	I0229 02:32:53.114276    8616 cni.go:84] Creating CNI manager for ""
	I0229 02:32:53.114407    8616 cni.go:136] 3 nodes found, recommending kindnet
	I0229 02:32:53.114407    8616 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:32:53.114407    8616 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.2.252 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-314500 NodeName:multinode-314500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.2.252"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.2.252 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:32:53.114407    8616 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.2.252
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-314500"
	  kubeletExtraArgs:
	    node-ip: 172.19.2.252
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.2.252"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:32:53.114407    8616 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-314500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.2.252
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:32:53.126403    8616 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:32:53.148099    8616 command_runner.go:130] > kubeadm
	I0229 02:32:53.148099    8616 command_runner.go:130] > kubectl
	I0229 02:32:53.148099    8616 command_runner.go:130] > kubelet
	I0229 02:32:53.148246    8616 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:32:53.157387    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:32:53.174824    8616 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0229 02:32:53.208300    8616 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:32:53.245652    8616 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0229 02:32:53.289001    8616 ssh_runner.go:195] Run: grep 172.19.2.252	control-plane.minikube.internal$ /etc/hosts
	I0229 02:32:53.295665    8616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.2.252	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:32:53.317287    8616 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500 for IP: 172.19.2.252
	I0229 02:32:53.317287    8616 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:32:53.317287    8616 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 02:32:53.318250    8616 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 02:32:53.319249    8616 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.key
	I0229 02:32:53.319249    8616 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.cbb46ad0
	I0229 02:32:53.319249    8616 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.cbb46ad0 with IP's: [172.19.2.252 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 02:32:53.857597    8616 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.cbb46ad0 ...
	I0229 02:32:53.857597    8616 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.cbb46ad0: {Name:mk2c79d4cbb7d5ea8294baf480b2b4cb9f5c51ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:32:53.858825    8616 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.cbb46ad0 ...
	I0229 02:32:53.858825    8616 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.cbb46ad0: {Name:mka09f9d4cec07c0d214289da2c97f23664bb2c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:32:53.859529    8616 certs.go:337] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.cbb46ad0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt
	I0229 02:32:53.870889    8616 certs.go:341] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.cbb46ad0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key
	I0229 02:32:53.871925    8616 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key
	I0229 02:32:53.871925    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 02:32:53.872886    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 02:32:53.873553    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 02:32:53.873693    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 02:32:53.873693    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 02:32:53.873693    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 02:32:53.873693    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 02:32:53.873693    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 02:32:53.874284    8616 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem (1338 bytes)
	W0229 02:32:53.874805    8616 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312_empty.pem, impossibly tiny 0 bytes
	I0229 02:32:53.874838    8616 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 02:32:53.874838    8616 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 02:32:53.874838    8616 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 02:32:53.875364    8616 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0229 02:32:53.875432    8616 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem (1708 bytes)
	I0229 02:32:53.876008    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem -> /usr/share/ca-certificates/3312.pem
	I0229 02:32:53.876008    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /usr/share/ca-certificates/33122.pem
	I0229 02:32:53.876008    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:32:53.877154    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:32:53.929544    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:32:53.979757    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:32:54.028347    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 02:32:54.076432    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:32:54.123214    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:32:54.173763    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:32:54.220276    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:32:54.266641    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem --> /usr/share/ca-certificates/3312.pem (1338 bytes)
	I0229 02:32:54.313660    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /usr/share/ca-certificates/33122.pem (1708 bytes)
	I0229 02:32:54.360080    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:32:54.408406    8616 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:32:54.451673    8616 ssh_runner.go:195] Run: openssl version
	I0229 02:32:54.460863    8616 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 02:32:54.470481    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:32:54.499916    8616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:32:54.511629    8616 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:32:54.511629    8616 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:32:54.520923    8616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:32:54.530581    8616 command_runner.go:130] > b5213941
	I0229 02:32:54.539735    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:32:54.571462    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3312.pem && ln -fs /usr/share/ca-certificates/3312.pem /etc/ssl/certs/3312.pem"
	I0229 02:32:54.602462    8616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3312.pem
	I0229 02:32:54.610794    8616 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:32:54.610794    8616 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:32:54.620358    8616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3312.pem
	I0229 02:32:54.629270    8616 command_runner.go:130] > 51391683
	I0229 02:32:54.638210    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3312.pem /etc/ssl/certs/51391683.0"
	I0229 02:32:54.667813    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/33122.pem && ln -fs /usr/share/ca-certificates/33122.pem /etc/ssl/certs/33122.pem"
	I0229 02:32:54.697471    8616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/33122.pem
	I0229 02:32:54.704009    8616 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:32:54.704009    8616 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:32:54.713763    8616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/33122.pem
	I0229 02:32:54.722319    8616 command_runner.go:130] > 3ec20f2e
	I0229 02:32:54.731782    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/33122.pem /etc/ssl/certs/3ec20f2e.0"
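The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's hashed certificate directory layout: a trusted CA in `/etc/ssl/certs` is found via a symlink named `<subject-hash>.0` (e.g. `b5213941.0` for minikubeCA in this run). A sketch of the same mechanism with a freshly generated throwaway CA; the file names and subject here are illustrative, not minikube's:

```shell
# Generate a throwaway self-signed CA (illustrative stand-in for minikubeCA).
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" \
  -keyout "$DIR/ca.key" -out "$DIR/ca.pem" 2>/dev/null

# OpenSSL looks up CAs in a cert directory by <subject-hash>.0 symlinks,
# which is exactly what the log's "ln -fs ... /etc/ssl/certs/b5213941.0" does.
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"

ls -l "$DIR/$HASH.0"
```

The `test -L ... || ln -fs ...` guard in the log makes the step idempotent across restarts: the symlink is only (re)created when missing.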
	I0229 02:32:54.763630    8616 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:32:54.771099    8616 command_runner.go:130] > ca.crt
	I0229 02:32:54.771099    8616 command_runner.go:130] > ca.key
	I0229 02:32:54.771099    8616 command_runner.go:130] > healthcheck-client.crt
	I0229 02:32:54.771099    8616 command_runner.go:130] > healthcheck-client.key
	I0229 02:32:54.771099    8616 command_runner.go:130] > peer.crt
	I0229 02:32:54.771099    8616 command_runner.go:130] > peer.key
	I0229 02:32:54.771099    8616 command_runner.go:130] > server.crt
	I0229 02:32:54.771099    8616 command_runner.go:130] > server.key
	I0229 02:32:54.780944    8616 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:32:54.790788    8616 command_runner.go:130] > Certificate will not expire
	I0229 02:32:54.800366    8616 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:32:54.810110    8616 command_runner.go:130] > Certificate will not expire
	I0229 02:32:54.820590    8616 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:32:54.830507    8616 command_runner.go:130] > Certificate will not expire
	I0229 02:32:54.839865    8616 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:32:54.849556    8616 command_runner.go:130] > Certificate will not expire
	I0229 02:32:54.858610    8616 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:32:54.869293    8616 command_runner.go:130] > Certificate will not expire
	I0229 02:32:54.878139    8616 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 02:32:54.887690    8616 command_runner.go:130] > Certificate will not expire
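Each `Certificate will not expire` line above comes from `openssl x509 -checkend 86400`, which prints that message and exits 0 when the certificate will still be valid 86400 seconds (24 hours) from now, and exits nonzero otherwise. A minimal reproduction with a short-lived throwaway cert (paths and subject are illustrative):

```shell
DIR=$(mktemp -d)
# Self-signed cert valid for 2 days (stand-in for the cluster certs in the log).
openssl req -x509 -newkey rsa:2048 -nodes -days 2 \
  -subj "/CN=checkend-demo" \
  -keyout "$DIR/t.key" -out "$DIR/t.crt" 2>/dev/null

# -checkend N asks: does the cert expire within the next N seconds?
openssl x509 -noout -in "$DIR/t.crt" -checkend 86400
```

minikube runs this per certificate before deciding whether a cluster restart can reuse the existing certs or must regenerate them.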
	I0229 02:32:54.887690    8616 kubeadm.go:404] StartCluster: {Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.5.202 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.5.92 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:32:54.895102    8616 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 02:32:54.941782    8616 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:32:54.961750    8616 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0229 02:32:54.961750    8616 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0229 02:32:54.961750    8616 command_runner.go:130] > /var/lib/minikube/etcd:
	I0229 02:32:54.961750    8616 command_runner.go:130] > member
	I0229 02:32:54.961750    8616 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:32:54.961750    8616 kubeadm.go:636] restartCluster start
	I0229 02:32:54.970277    8616 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:32:54.989566    8616 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:32:54.991059    8616 kubeconfig.go:135] verify returned: extract IP: "multinode-314500" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:32:54.991248    8616 kubeconfig.go:146] "multinode-314500" context is missing from C:\Users\jenkins.minikube5\minikube-integration\kubeconfig - will repair!
	I0229 02:32:54.992261    8616 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:32:55.005422    8616 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:32:55.006488    8616 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.252:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500/client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500/client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:
[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:32:55.007827    8616 cert_rotation.go:137] Starting client certificate rotation controller
	I0229 02:32:55.018681    8616 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:32:55.037400    8616 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0229 02:32:55.037400    8616 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:32:55.037400    8616 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0229 02:32:55.037400    8616 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0229 02:32:55.037400    8616 command_runner.go:130] >  kind: InitConfiguration
	I0229 02:32:55.037400    8616 command_runner.go:130] >  localAPIEndpoint:
	I0229 02:32:55.037400    8616 command_runner.go:130] > -  advertiseAddress: 172.19.2.165
	I0229 02:32:55.037400    8616 command_runner.go:130] > +  advertiseAddress: 172.19.2.252
	I0229 02:32:55.037400    8616 command_runner.go:130] >    bindPort: 8443
	I0229 02:32:55.037400    8616 command_runner.go:130] >  bootstrapTokens:
	I0229 02:32:55.037400    8616 command_runner.go:130] >    - groups:
	I0229 02:32:55.037400    8616 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0229 02:32:55.037400    8616 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0229 02:32:55.037400    8616 command_runner.go:130] >    name: "multinode-314500"
	I0229 02:32:55.037400    8616 command_runner.go:130] >    kubeletExtraArgs:
	I0229 02:32:55.037400    8616 command_runner.go:130] > -    node-ip: 172.19.2.165
	I0229 02:32:55.037400    8616 command_runner.go:130] > +    node-ip: 172.19.2.252
	I0229 02:32:55.037400    8616 command_runner.go:130] >    taints: []
	I0229 02:32:55.037400    8616 command_runner.go:130] >  ---
	I0229 02:32:55.037400    8616 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0229 02:32:55.037400    8616 command_runner.go:130] >  kind: ClusterConfiguration
	I0229 02:32:55.037400    8616 command_runner.go:130] >  apiServer:
	I0229 02:32:55.037400    8616 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.19.2.165"]
	I0229 02:32:55.037927    8616 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.19.2.252"]
	I0229 02:32:55.037962    8616 command_runner.go:130] >    extraArgs:
	I0229 02:32:55.037985    8616 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0229 02:32:55.037985    8616 command_runner.go:130] >  controllerManager:
	I0229 02:32:55.038044    8616 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.19.2.165
	+  advertiseAddress: 172.19.2.252
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-314500"
	   kubeletExtraArgs:
	-    node-ip: 172.19.2.165
	+    node-ip: 172.19.2.252
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.19.2.165"]
	+  certSANs: ["127.0.0.1", "localhost", "172.19.2.252"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0229 02:32:55.038116    8616 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:32:55.049101    8616 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 02:32:55.078145    8616 command_runner.go:130] > 11c14ebdfaf6
	I0229 02:32:55.078145    8616 command_runner.go:130] > cf65b06d29a0
	I0229 02:32:55.078145    8616 command_runner.go:130] > 13f6ae46b7d0
	I0229 02:32:55.078145    8616 command_runner.go:130] > 8c944d91b625
	I0229 02:32:55.078145    8616 command_runner.go:130] > dd61788b0a0d
	I0229 02:32:55.078145    8616 command_runner.go:130] > c93e33130746
	I0229 02:32:55.078145    8616 command_runner.go:130] > 4b10f8bd940b
	I0229 02:32:55.078145    8616 command_runner.go:130] > edb41bd5e75d
	I0229 02:32:55.078145    8616 command_runner.go:130] > e5bc2b41493b
	I0229 02:32:55.078145    8616 command_runner.go:130] > ab0c4864aee5
	I0229 02:32:55.078145    8616 command_runner.go:130] > 26b1ab05f99a
	I0229 02:32:55.078145    8616 command_runner.go:130] > 9815e253e1a0
	I0229 02:32:55.078145    8616 command_runner.go:130] > bf7b9750ae9e
	I0229 02:32:55.078145    8616 command_runner.go:130] > 96810146c69c
	I0229 02:32:55.078145    8616 command_runner.go:130] > 2d13a46d8389
	I0229 02:32:55.078145    8616 command_runner.go:130] > b93004a3ca70
	I0229 02:32:55.078145    8616 docker.go:483] Stopping containers: [11c14ebdfaf6 cf65b06d29a0 13f6ae46b7d0 8c944d91b625 dd61788b0a0d c93e33130746 4b10f8bd940b edb41bd5e75d e5bc2b41493b ab0c4864aee5 26b1ab05f99a 9815e253e1a0 bf7b9750ae9e 96810146c69c 2d13a46d8389 b93004a3ca70]
	I0229 02:32:55.087427    8616 ssh_runner.go:195] Run: docker stop 11c14ebdfaf6 cf65b06d29a0 13f6ae46b7d0 8c944d91b625 dd61788b0a0d c93e33130746 4b10f8bd940b edb41bd5e75d e5bc2b41493b ab0c4864aee5 26b1ab05f99a 9815e253e1a0 bf7b9750ae9e 96810146c69c 2d13a46d8389 b93004a3ca70
	I0229 02:32:55.114824    8616 command_runner.go:130] > 11c14ebdfaf6
	I0229 02:32:55.114824    8616 command_runner.go:130] > cf65b06d29a0
	I0229 02:32:55.114824    8616 command_runner.go:130] > 13f6ae46b7d0
	I0229 02:32:55.114824    8616 command_runner.go:130] > 8c944d91b625
	I0229 02:32:55.114824    8616 command_runner.go:130] > dd61788b0a0d
	I0229 02:32:55.114824    8616 command_runner.go:130] > c93e33130746
	I0229 02:32:55.114824    8616 command_runner.go:130] > 4b10f8bd940b
	I0229 02:32:55.114824    8616 command_runner.go:130] > edb41bd5e75d
	I0229 02:32:55.114824    8616 command_runner.go:130] > e5bc2b41493b
	I0229 02:32:55.114824    8616 command_runner.go:130] > ab0c4864aee5
	I0229 02:32:55.114824    8616 command_runner.go:130] > 26b1ab05f99a
	I0229 02:32:55.114824    8616 command_runner.go:130] > 9815e253e1a0
	I0229 02:32:55.114824    8616 command_runner.go:130] > bf7b9750ae9e
	I0229 02:32:55.114824    8616 command_runner.go:130] > 96810146c69c
	I0229 02:32:55.114824    8616 command_runner.go:130] > 2d13a46d8389
	I0229 02:32:55.114824    8616 command_runner.go:130] > b93004a3ca70
	I0229 02:32:55.124906    8616 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:32:55.168832    8616 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:32:55.187191    8616 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0229 02:32:55.187191    8616 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0229 02:32:55.187191    8616 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0229 02:32:55.187191    8616 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:32:55.187191    8616 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:32:55.196718    8616 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:32:55.214952    8616 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:32:55.214952    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:32:55.536773    8616 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:32:55.536773    8616 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0229 02:32:55.536773    8616 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0229 02:32:55.536773    8616 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:32:55.536773    8616 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0229 02:32:55.536773    8616 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:32:55.536773    8616 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0229 02:32:55.536773    8616 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0229 02:32:55.536773    8616 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:32:55.536773    8616 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:32:55.536773    8616 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:32:55.536773    8616 command_runner.go:130] > [certs] Using the existing "sa" key
	I0229 02:32:55.536773    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:32:55.618400    8616 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:32:56.013806    8616 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:32:56.259229    8616 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:32:56.360463    8616 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:32:56.628368    8616 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:32:56.631735    8616 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.0949009s)
	I0229 02:32:56.631735    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:32:56.933406    8616 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:32:56.933462    8616 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:32:56.933462    8616 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 02:32:56.933462    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:32:57.028438    8616 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:32:57.028666    8616 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:32:57.028666    8616 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:32:57.028756    8616 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:32:57.028846    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:32:57.111263    8616 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:32:57.112689    8616 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:32:57.123624    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:57.627066    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:58.133641    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:58.632850    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:59.126499    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:59.157177    8616 command_runner.go:130] > 1889
	I0229 02:32:59.157177    8616 api_server.go:72] duration metric: took 2.045799s to wait for apiserver process to appear ...
	I0229 02:32:59.157177    8616 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:32:59.157177    8616 api_server.go:253] Checking apiserver healthz at https://172.19.2.252:8443/healthz ...
	I0229 02:33:02.373645    8616 api_server.go:279] https://172.19.2.252:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:33:02.373645    8616 api_server.go:103] status: https://172.19.2.252:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:33:02.373645    8616 api_server.go:253] Checking apiserver healthz at https://172.19.2.252:8443/healthz ...
	I0229 02:33:02.405092    8616 api_server.go:279] https://172.19.2.252:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:33:02.405092    8616 api_server.go:103] status: https://172.19.2.252:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:33:02.666183    8616 api_server.go:253] Checking apiserver healthz at https://172.19.2.252:8443/healthz ...
	I0229 02:33:02.679744    8616 api_server.go:279] https://172.19.2.252:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:33:02.679744    8616 api_server.go:103] status: https://172.19.2.252:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:33:03.158138    8616 api_server.go:253] Checking apiserver healthz at https://172.19.2.252:8443/healthz ...
	I0229 02:33:03.166913    8616 api_server.go:279] https://172.19.2.252:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:33:03.166913    8616 api_server.go:103] status: https://172.19.2.252:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:33:03.667595    8616 api_server.go:253] Checking apiserver healthz at https://172.19.2.252:8443/healthz ...
	I0229 02:33:03.681719    8616 api_server.go:279] https://172.19.2.252:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:33:03.681719    8616 api_server.go:103] status: https://172.19.2.252:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:33:04.161162    8616 api_server.go:253] Checking apiserver healthz at https://172.19.2.252:8443/healthz ...
	I0229 02:33:04.169007    8616 api_server.go:279] https://172.19.2.252:8443/healthz returned 200:
	ok
	I0229 02:33:04.169493    8616 round_trippers.go:463] GET https://172.19.2.252:8443/version
	I0229 02:33:04.169493    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:04.169493    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:04.169493    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:04.185339    8616 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0229 02:33:04.185947    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:04.185947    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:04.185947    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:04.185947    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:04.185947    8616 round_trippers.go:580]     Content-Length: 264
	I0229 02:33:04.185947    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:04 GMT
	I0229 02:33:04.185947    8616 round_trippers.go:580]     Audit-Id: be676fef-a58f-4736-a051-09431c03b562
	I0229 02:33:04.186019    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:04.186019    8616 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0229 02:33:04.186019    8616 api_server.go:141] control plane version: v1.28.4
	I0229 02:33:04.186019    8616 api_server.go:131] duration metric: took 5.0285611s to wait for apiserver health ...
	I0229 02:33:04.186019    8616 cni.go:84] Creating CNI manager for ""
	I0229 02:33:04.186019    8616 cni.go:136] 3 nodes found, recommending kindnet
	I0229 02:33:04.187057    8616 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0229 02:33:04.196427    8616 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 02:33:04.207427    8616 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 02:33:04.208437    8616 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 02:33:04.208437    8616 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 02:33:04.208437    8616 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 02:33:04.208437    8616 command_runner.go:130] > Access: 2024-02-29 02:31:42.605077900 +0000
	I0229 02:33:04.208437    8616 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 02:33:04.208437    8616 command_runner.go:130] > Change: 2024-02-29 02:31:30.415000000 +0000
	I0229 02:33:04.208437    8616 command_runner.go:130] >  Birth: -
	I0229 02:33:04.208437    8616 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 02:33:04.208437    8616 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 02:33:04.260319    8616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 02:33:05.710766    8616 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0229 02:33:05.716805    8616 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0229 02:33:05.721771    8616 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0229 02:33:05.740472    8616 command_runner.go:130] > daemonset.apps/kindnet configured
	I0229 02:33:05.743711    8616 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.4832447s)
	I0229 02:33:05.743711    8616 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:33:05.743711    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods
	I0229 02:33:05.743711    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:05.743711    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:05.743711    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:05.749609    8616 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:33:05.749609    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:05.749609    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:05.749609    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:05.749702    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:05 GMT
	I0229 02:33:05.749702    8616 round_trippers.go:580]     Audit-Id: eb6c2d27-fa8d-4989-bb14-6c511ce87e4a
	I0229 02:33:05.749702    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:05.749702    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:05.751762    8616 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1388"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1320","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 81051 chars]
	I0229 02:33:05.758120    8616 system_pods.go:59] 12 kube-system pods found
	I0229 02:33:05.758199    8616 system_pods.go:61] "coredns-5dd5756b68-8g6tg" [ef7fb259-9f24-4645-9eff-2b16f6789e1b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:33:05.758199    8616 system_pods.go:61] "etcd-multinode-314500" [b4f5f225-c7b2-4d26-a0ad-f09b2045ea14] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:33:05.758199    8616 system_pods.go:61] "kindnet-6r7b8" [402c3ac1-05a9-45f1-aa7d-c0fb8ced6c87] Running
	I0229 02:33:05.758199    8616 system_pods.go:61] "kindnet-7g9t8" [1bbebf1c-4e33-40cb-915e-6df5982dbf0c] Running
	I0229 02:33:05.758199    8616 system_pods.go:61] "kindnet-t9r77" [4620d417-744c-4049-82ab-79d1ee7f047c] Running
	I0229 02:33:05.758199    8616 system_pods.go:61] "kube-apiserver-multinode-314500" [d64133c2-8b75-4b12-b270-cbd060c1374e] Pending
	I0229 02:33:05.758199    8616 system_pods.go:61] "kube-controller-manager-multinode-314500" [58e57902-e113-44a9-b5b5-4aba2ba13491] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:33:05.758199    8616 system_pods.go:61] "kube-proxy-4gbrl" [accb56cb-79ee-4f16-b05e-91bf554c4a60] Running
	I0229 02:33:05.758199    8616 system_pods.go:61] "kube-proxy-6r6j4" [2b84b22d-3786-4f9e-a23a-c7cfc93bb671] Running
	I0229 02:33:05.758199    8616 system_pods.go:61] "kube-proxy-zvlt2" [0f29dabe-dc06-4460-bf19-55470247dbcc] Running
	I0229 02:33:05.758199    8616 system_pods.go:61] "kube-scheduler-multinode-314500" [31fcecc6-17de-43a6-892d-37cd915de64b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:33:05.758199    8616 system_pods.go:61] "storage-provisioner" [9780520b-8ff9-408a-ab6f-41b63790ccd1] Running
	I0229 02:33:05.758199    8616 system_pods.go:74] duration metric: took 14.4867ms to wait for pod list to return data ...
	I0229 02:33:05.758199    8616 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:33:05.758199    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes
	I0229 02:33:05.758199    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:05.758199    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:05.758199    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:05.762877    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:05.762877    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:05.762877    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:05.762877    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:05.762877    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:05.762877    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:05.762877    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:05 GMT
	I0229 02:33:05.762877    8616 round_trippers.go:580]     Audit-Id: 146476b5-daea-4b9f-99f5-b9cf5d810db0
	I0229 02:33:05.762877    8616 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1388"},"items":[{"metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1303","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14867 chars]
	I0229 02:33:05.764516    8616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:33:05.764597    8616 node_conditions.go:123] node cpu capacity is 2
	I0229 02:33:05.764597    8616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:33:05.764597    8616 node_conditions.go:123] node cpu capacity is 2
	I0229 02:33:05.764597    8616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:33:05.764665    8616 node_conditions.go:123] node cpu capacity is 2
	I0229 02:33:05.764665    8616 node_conditions.go:105] duration metric: took 6.4655ms to run NodePressure ...
	I0229 02:33:05.764665    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:33:06.088777    8616 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0229 02:33:06.088848    8616 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0229 02:33:06.088982    8616 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:33:06.089230    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0229 02:33:06.089230    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:06.089230    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:06.089230    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:06.097031    8616 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:33:06.097031    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:06.097031    8616 round_trippers.go:580]     Audit-Id: 78e942e9-7a12-4c08-a271-06e4d7488263
	I0229 02:33:06.097031    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:06.097031    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:06.097031    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:06.097031    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:06.097031    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:06 GMT
	I0229 02:33:06.097885    8616 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1390"},"items":[{"metadata":{"name":"etcd-multinode-314500","namespace":"kube-system","uid":"b4f5f225-c7b2-4d26-a0ad-f09b2045ea14","resourceVersion":"1321","creationTimestamp":"2024-02-29T02:33:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.2.252:2379","kubernetes.io/config.hash":"b583592d76a92080553678603be807ce","kubernetes.io/config.mirror":"b583592d76a92080553678603be807ce","kubernetes.io/config.seen":"2024-02-29T02:32:57.667230131Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:33:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 27307 chars]
	I0229 02:33:06.099697    8616 kubeadm.go:787] kubelet initialised
	I0229 02:33:06.099697    8616 kubeadm.go:788] duration metric: took 10.715ms waiting for restarted kubelet to initialise ...
	I0229 02:33:06.099697    8616 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:33:06.099697    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods
	I0229 02:33:06.099697    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:06.099697    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:06.099697    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:06.105876    8616 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:33:06.105876    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:06.105876    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:06.105876    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:06.105876    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:06 GMT
	I0229 02:33:06.105876    8616 round_trippers.go:580]     Audit-Id: 1ae40390-de03-4993-809d-9f81a9a87c72
	I0229 02:33:06.105876    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:06.105876    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:06.106861    8616 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1390"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1320","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 81051 chars]
	I0229 02:33:06.110864    8616 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:06.110864    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:06.110864    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:06.110864    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:06.110864    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:06.114898    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:06.114898    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:06.114898    8616 round_trippers.go:580]     Audit-Id: c2ef9c2f-eab3-40c1-af03-1485254e7ba5
	I0229 02:33:06.114898    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:06.114898    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:06.114898    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:06.114898    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:06.114898    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:06 GMT
	I0229 02:33:06.114898    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1320","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:33:06.115874    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:06.115874    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:06.115874    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:06.115874    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:06.118867    8616 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:33:06.118867    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:06.118867    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:06.118867    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:06.118867    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:06.118867    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:06.118867    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:06 GMT
	I0229 02:33:06.118867    8616 round_trippers.go:580]     Audit-Id: 99ac90fe-94fb-4e2d-a557-7f837bc23105
	I0229 02:33:06.118867    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1303","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:33:06.119869    8616 pod_ready.go:97] node "multinode-314500" hosting pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:33:06.119869    8616 pod_ready.go:81] duration metric: took 9.0046ms waiting for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	E0229 02:33:06.119869    8616 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-314500" hosting pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:33:06.119869    8616 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:06.119869    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-314500
	I0229 02:33:06.119869    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:06.119869    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:06.119869    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:06.123920    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:06.123920    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:06.123920    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:06 GMT
	I0229 02:33:06.123920    8616 round_trippers.go:580]     Audit-Id: db029b62-02fc-4fc3-861e-2a4135a7eca7
	I0229 02:33:06.123920    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:06.123920    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:06.123920    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:06.123920    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:06.123920    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-314500","namespace":"kube-system","uid":"b4f5f225-c7b2-4d26-a0ad-f09b2045ea14","resourceVersion":"1321","creationTimestamp":"2024-02-29T02:33:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.2.252:2379","kubernetes.io/config.hash":"b583592d76a92080553678603be807ce","kubernetes.io/config.mirror":"b583592d76a92080553678603be807ce","kubernetes.io/config.seen":"2024-02-29T02:32:57.667230131Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:33:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6077 chars]
	I0229 02:33:06.124746    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:06.124746    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:06.124746    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:06.124746    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:06.128044    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:06.128044    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:06.128044    8616 round_trippers.go:580]     Audit-Id: 19ddd4c2-a965-42eb-926f-a90e2d2238ff
	I0229 02:33:06.128044    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:06.128044    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:06.128044    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:06.128044    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:06.128044    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:06 GMT
	I0229 02:33:06.128044    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1303","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:33:06.128044    8616 pod_ready.go:97] node "multinode-314500" hosting pod "etcd-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:33:06.128044    8616 pod_ready.go:81] duration metric: took 8.1749ms waiting for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	E0229 02:33:06.128044    8616 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-314500" hosting pod "etcd-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:33:06.128044    8616 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:06.129072    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-314500
	I0229 02:33:06.129072    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:06.129072    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:06.129072    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:06.132049    8616 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:33:06.132049    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:06.132049    8616 round_trippers.go:580]     Audit-Id: 5049b653-af21-49c1-8c31-5d4854fc4594
	I0229 02:33:06.132049    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:06.132049    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:06.132049    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:06.132049    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:06.132049    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:06 GMT
	I0229 02:33:06.132472    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-314500","namespace":"kube-system","uid":"d64133c2-8b75-4b12-b270-cbd060c1374e","resourceVersion":"1325","creationTimestamp":"2024-02-29T02:33:04Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.2.252:8443","kubernetes.io/config.hash":"462233dfd1884b55b9575973e0f20340","kubernetes.io/config.mirror":"462233dfd1884b55b9575973e0f20340","kubernetes.io/config.seen":"2024-02-29T02:32:57.667231431Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:33:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 5619 chars]
	I0229 02:33:06.132982    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:06.132982    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:06.133042    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:06.133042    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:06.137452    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:06.137452    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:06.137452    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:06.137452    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:06 GMT
	I0229 02:33:06.137452    8616 round_trippers.go:580]     Audit-Id: ef989e32-df5e-4b5d-a03c-74ffcb5794c2
	I0229 02:33:06.137452    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:06.137452    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:06.137452    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:06.138104    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1303","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:33:06.139119    8616 pod_ready.go:97] node "multinode-314500" hosting pod "kube-apiserver-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:33:06.139119    8616 pod_ready.go:81] duration metric: took 11.0738ms waiting for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	E0229 02:33:06.139119    8616 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-314500" hosting pod "kube-apiserver-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:33:06.139119    8616 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:06.139119    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-314500
	I0229 02:33:06.139119    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:06.139119    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:06.139119    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:06.143154    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:06.143154    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:06.143154    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:06.143154    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:06.143154    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:06 GMT
	I0229 02:33:06.143154    8616 round_trippers.go:580]     Audit-Id: ad636bef-d3fc-400e-aeab-21c0d663752e
	I0229 02:33:06.143154    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:06.143154    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:06.143154    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-314500","namespace":"kube-system","uid":"58e57902-e113-44a9-b5b5-4aba2ba13491","resourceVersion":"1308","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.mirror":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.seen":"2024-02-29T02:15:52.221398986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7433 chars]
	I0229 02:33:06.144086    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:06.144086    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:06.144086    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:06.144086    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:06.146106    8616 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:33:06.146106    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:06.146106    8616 round_trippers.go:580]     Audit-Id: 998b3e0f-9cc9-4dbe-b956-70533f33934e
	I0229 02:33:06.146106    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:06.146106    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:06.146106    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:06.146106    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:06.146106    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:06 GMT
	I0229 02:33:06.147089    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1303","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:33:06.147089    8616 pod_ready.go:97] node "multinode-314500" hosting pod "kube-controller-manager-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:33:06.147089    8616 pod_ready.go:81] duration metric: took 7.9699ms waiting for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	E0229 02:33:06.147089    8616 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-314500" hosting pod "kube-controller-manager-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:33:06.147089    8616 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:06.346238    8616 request.go:629] Waited for 199.1377ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gbrl
	I0229 02:33:06.346238    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gbrl
	I0229 02:33:06.346238    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:06.346238    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:06.346238    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:06.350817    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:06.350817    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:06.350817    8616 round_trippers.go:580]     Audit-Id: c9373f80-8eb2-4992-aa9e-b437781a556c
	I0229 02:33:06.350817    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:06.350817    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:06.350817    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:06.350817    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:06.350817    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:06 GMT
	I0229 02:33:06.351353    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4gbrl","generateName":"kube-proxy-","namespace":"kube-system","uid":"accb56cb-79ee-4f16-b05e-91bf554c4a60","resourceVersion":"606","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0229 02:33:06.550560    8616 request.go:629] Waited for 198.2041ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:33:06.550560    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:33:06.550560    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:06.550560    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:06.550560    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:06.558683    8616 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:33:06.558745    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:06.558745    8616 round_trippers.go:580]     Audit-Id: b5cafabb-d076-487d-8275-2401c4b9ad34
	I0229 02:33:06.558745    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:06.558745    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:06.558745    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:06.558745    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:06.558745    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:06 GMT
	I0229 02:33:06.560107    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"1213","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_28_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager": [truncated 3818 chars]
	I0229 02:33:06.560853    8616 pod_ready.go:92] pod "kube-proxy-4gbrl" in "kube-system" namespace has status "Ready":"True"
	I0229 02:33:06.560904    8616 pod_ready.go:81] duration metric: took 413.7919ms waiting for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:06.560942    8616 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:06.755977    8616 request.go:629] Waited for 194.6511ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:33:06.755977    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:33:06.755977    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:06.755977    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:06.755977    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:06.760891    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:06.760891    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:06.760891    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:06.760891    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:06 GMT
	I0229 02:33:06.760891    8616 round_trippers.go:580]     Audit-Id: 7dce0926-0056-4af2-8c1b-358fe7244683
	I0229 02:33:06.760891    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:06.760891    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:06.760891    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:06.761845    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6r6j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b84b22d-3786-4f9e-a23a-c7cfc93bb671","resourceVersion":"1324","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5735 chars]
	I0229 02:33:06.944601    8616 request.go:629] Waited for 182.6078ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:06.944819    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:06.944883    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:06.944883    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:06.944940    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:06.947295    8616 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:33:06.948321    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:06.948321    8616 round_trippers.go:580]     Audit-Id: 42b79a43-e77b-4b16-80fa-7ae036d01d59
	I0229 02:33:06.948321    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:06.948321    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:06.948374    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:06.948374    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:06.948409    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:07 GMT
	I0229 02:33:06.948862    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1303","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:33:06.949422    8616 pod_ready.go:97] node "multinode-314500" hosting pod "kube-proxy-6r6j4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:33:06.949479    8616 pod_ready.go:81] duration metric: took 388.5154ms waiting for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	E0229 02:33:06.949479    8616 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-314500" hosting pod "kube-proxy-6r6j4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:33:06.949479    8616 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zvlt2" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:07.149139    8616 request.go:629] Waited for 199.6489ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvlt2
	I0229 02:33:07.149693    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvlt2
	I0229 02:33:07.149731    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:07.149731    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:07.149731    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:07.156611    8616 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:33:07.156611    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:07.156611    8616 round_trippers.go:580]     Audit-Id: 2a634bfd-da1f-4f3b-8439-fc3719d25349
	I0229 02:33:07.156611    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:07.156611    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:07.156611    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:07.156611    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:07.156611    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:07 GMT
	I0229 02:33:07.156611    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zvlt2","generateName":"kube-proxy-","namespace":"kube-system","uid":"0f29dabe-dc06-4460-bf19-55470247dbcc","resourceVersion":"1230","creationTimestamp":"2024-02-29T02:28:50Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:28:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5534 chars]
	I0229 02:33:07.353830    8616 request.go:629] Waited for 196.3315ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:33:07.354071    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:33:07.354071    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:07.354071    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:07.354129    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:07.358806    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:07.358806    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:07.358806    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:07.358806    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:07.358806    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:07 GMT
	I0229 02:33:07.358806    8616 round_trippers.go:580]     Audit-Id: 09ff3e77-c10f-4360-a2ab-acf700cab0c2
	I0229 02:33:07.358806    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:07.358806    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:07.359457    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m03","uid":"e3855f89-f53a-45b3-8e99-79bb2f21bdb0","resourceVersion":"1265","creationTimestamp":"2024-02-29T02:28:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_28_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:28:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3648 chars]
	I0229 02:33:07.359784    8616 pod_ready.go:92] pod "kube-proxy-zvlt2" in "kube-system" namespace has status "Ready":"True"
	I0229 02:33:07.359784    8616 pod_ready.go:81] duration metric: took 410.2817ms waiting for pod "kube-proxy-zvlt2" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:07.359784    8616 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:07.556808    8616 request.go:629] Waited for 196.7132ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:33:07.556990    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:33:07.556990    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:07.557048    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:07.557048    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:07.560852    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:07.560927    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:07.560927    8616 round_trippers.go:580]     Audit-Id: 46f3d3bd-4186-41d0-bf19-f9632bd98b32
	I0229 02:33:07.560927    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:07.560927    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:07.560927    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:07.560927    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:07.560995    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:07 GMT
	I0229 02:33:07.561192    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-314500","namespace":"kube-system","uid":"31fcecc6-17de-43a6-892d-37cd915de64b","resourceVersion":"1397","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.mirror":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.seen":"2024-02-29T02:15:52.221399886Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5145 chars]
	I0229 02:33:07.746710    8616 request.go:629] Waited for 184.7344ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:07.746710    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:07.746710    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:07.746710    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:07.746710    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:07.750870    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:07.750870    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:07.750870    8616 round_trippers.go:580]     Audit-Id: bd6d835b-37a0-49e5-978e-9215510e2b69
	I0229 02:33:07.750870    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:07.750870    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:07.750870    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:07.750870    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:07.750870    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:07 GMT
	I0229 02:33:07.750870    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1303","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:33:07.751578    8616 pod_ready.go:97] node "multinode-314500" hosting pod "kube-scheduler-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:33:07.751578    8616 pod_ready.go:81] duration metric: took 391.7725ms waiting for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	E0229 02:33:07.751578    8616 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-314500" hosting pod "kube-scheduler-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:33:07.751578    8616 pod_ready.go:38] duration metric: took 1.6517887s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:33:07.751578    8616 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:33:07.770431    8616 command_runner.go:130] > -16
	I0229 02:33:07.770584    8616 ops.go:34] apiserver oom_adj: -16
	I0229 02:33:07.770691    8616 kubeadm.go:640] restartCluster took 12.8081182s
	I0229 02:33:07.770744    8616 kubeadm.go:406] StartCluster complete in 12.8823334s
	I0229 02:33:07.770744    8616 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:33:07.771114    8616 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:33:07.772342    8616 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:33:07.773391    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:33:07.773391    8616 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:33:07.774414    8616 out.go:177] * Enabled addons: 
	I0229 02:33:07.774414    8616 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:33:07.775345    8616 addons.go:505] enable addons completed in 1.9539ms: enabled=[]
	I0229 02:33:07.785396    8616 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:33:07.785396    8616 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.252:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:33:07.786385    8616 cert_rotation.go:137] Starting client certificate rotation controller
	I0229 02:33:07.787391    8616 round_trippers.go:463] GET https://172.19.2.252:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 02:33:07.787391    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:07.787391    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:07.787391    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:07.802350    8616 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0229 02:33:07.803289    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:07.803289    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:07.803289    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:07.803289    8616 round_trippers.go:580]     Content-Length: 292
	I0229 02:33:07.803289    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:07 GMT
	I0229 02:33:07.803357    8616 round_trippers.go:580]     Audit-Id: d7dfd584-ccc9-45cd-a225-80d574728435
	I0229 02:33:07.803357    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:07.803357    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:07.803392    8616 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"1389","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 02:33:07.803392    8616 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-314500" context rescaled to 1 replicas
	I0229 02:33:07.803392    8616 start.go:223] Will wait 6m0s for node &{Name: IP:172.19.2.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 02:33:07.804409    8616 out.go:177] * Verifying Kubernetes components...
	I0229 02:33:07.812998    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:33:08.001039    8616 command_runner.go:130] > apiVersion: v1
	I0229 02:33:08.001122    8616 command_runner.go:130] > data:
	I0229 02:33:08.001185    8616 command_runner.go:130] >   Corefile: |
	I0229 02:33:08.001185    8616 command_runner.go:130] >     .:53 {
	I0229 02:33:08.001185    8616 command_runner.go:130] >         log
	I0229 02:33:08.001185    8616 command_runner.go:130] >         errors
	I0229 02:33:08.001185    8616 command_runner.go:130] >         health {
	I0229 02:33:08.001185    8616 command_runner.go:130] >            lameduck 5s
	I0229 02:33:08.001185    8616 command_runner.go:130] >         }
	I0229 02:33:08.001185    8616 command_runner.go:130] >         ready
	I0229 02:33:08.001185    8616 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0229 02:33:08.001185    8616 command_runner.go:130] >            pods insecure
	I0229 02:33:08.001185    8616 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0229 02:33:08.001185    8616 command_runner.go:130] >            ttl 30
	I0229 02:33:08.001185    8616 command_runner.go:130] >         }
	I0229 02:33:08.001185    8616 command_runner.go:130] >         prometheus :9153
	I0229 02:33:08.001185    8616 command_runner.go:130] >         hosts {
	I0229 02:33:08.001185    8616 command_runner.go:130] >            172.19.0.1 host.minikube.internal
	I0229 02:33:08.001185    8616 command_runner.go:130] >            fallthrough
	I0229 02:33:08.001185    8616 command_runner.go:130] >         }
	I0229 02:33:08.001185    8616 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0229 02:33:08.001185    8616 command_runner.go:130] >            max_concurrent 1000
	I0229 02:33:08.001185    8616 command_runner.go:130] >         }
	I0229 02:33:08.001185    8616 command_runner.go:130] >         cache 30
	I0229 02:33:08.001185    8616 command_runner.go:130] >         loop
	I0229 02:33:08.001185    8616 command_runner.go:130] >         reload
	I0229 02:33:08.001185    8616 command_runner.go:130] >         loadbalance
	I0229 02:33:08.001185    8616 command_runner.go:130] >     }
	I0229 02:33:08.001185    8616 command_runner.go:130] > kind: ConfigMap
	I0229 02:33:08.001185    8616 command_runner.go:130] > metadata:
	I0229 02:33:08.001185    8616 command_runner.go:130] >   creationTimestamp: "2024-02-29T02:15:51Z"
	I0229 02:33:08.001185    8616 command_runner.go:130] >   name: coredns
	I0229 02:33:08.001185    8616 command_runner.go:130] >   namespace: kube-system
	I0229 02:33:08.001185    8616 command_runner.go:130] >   resourceVersion: "388"
	I0229 02:33:08.001715    8616 command_runner.go:130] >   uid: 3fc93d17-14a4-4d49-9f77-f2cd8adceaed
	I0229 02:33:08.001838    8616 node_ready.go:35] waiting up to 6m0s for node "multinode-314500" to be "Ready" ...
	I0229 02:33:08.001838    8616 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 02:33:08.001838    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:08.001838    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:08.001838    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:08.001838    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:08.005408    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:08.006439    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:08.006439    8616 round_trippers.go:580]     Audit-Id: 9edc4ba4-2d08-4207-993d-cdc1e9a147e8
	I0229 02:33:08.006439    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:08.006439    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:08.006439    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:08.006439    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:08.006439    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:08 GMT
	I0229 02:33:08.006439    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:08.006439    8616 node_ready.go:49] node "multinode-314500" has status "Ready":"True"
	I0229 02:33:08.006439    8616 node_ready.go:38] duration metric: took 4.6011ms waiting for node "multinode-314500" to be "Ready" ...
	I0229 02:33:08.006439    8616 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:33:08.155552    8616 request.go:629] Waited for 148.953ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods
	I0229 02:33:08.155904    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods
	I0229 02:33:08.155924    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:08.155924    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:08.155924    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:08.162327    8616 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:33:08.162327    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:08.162327    8616 round_trippers.go:580]     Audit-Id: d5f748c5-771a-4565-9050-b319d3be7f39
	I0229 02:33:08.162327    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:08.162327    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:08.162327    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:08.162327    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:08.162509    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:08 GMT
	I0229 02:33:08.164070    8616 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1398"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1320","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83067 chars]
	I0229 02:33:08.168459    8616 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:08.358684    8616 request.go:629] Waited for 190.2147ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:08.358684    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:08.358684    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:08.358684    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:08.358684    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:08.362347    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:08.363167    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:08.363167    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:08.363167    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:08.363167    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:08.363409    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:08.363409    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:08 GMT
	I0229 02:33:08.363409    8616 round_trippers.go:580]     Audit-Id: 0dd031d6-1872-4069-acac-3222a3056e71
	I0229 02:33:08.363785    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1320","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:33:08.544416    8616 request.go:629] Waited for 179.4247ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:08.544762    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:08.544762    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:08.544762    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:08.544762    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:08.549352    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:08.549352    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:08.549436    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:08 GMT
	I0229 02:33:08.549436    8616 round_trippers.go:580]     Audit-Id: b814cce0-f861-422f-9324-6f279b27a1c1
	I0229 02:33:08.549436    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:08.549436    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:08.549436    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:08.549436    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:08.550069    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:08.747592    8616 request.go:629] Waited for 78.2283ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:08.747592    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:08.747592    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:08.747592    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:08.747592    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:08.752268    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:08.752487    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:08.752487    8616 round_trippers.go:580]     Audit-Id: 880d199f-9e96-4269-82e1-efa85dcc6b11
	I0229 02:33:08.752487    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:08.752487    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:08.752487    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:08.752487    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:08.752487    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:08 GMT
	I0229 02:33:08.752625    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1320","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:33:08.951845    8616 request.go:629] Waited for 198.3927ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:08.952224    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:08.952224    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:08.952295    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:08.952295    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:08.956678    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:08.956678    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:08.956678    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:08.956678    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:09 GMT
	I0229 02:33:08.956678    8616 round_trippers.go:580]     Audit-Id: 4de446ed-b378-4d2a-a6dc-65eeef0442a4
	I0229 02:33:08.956678    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:08.956810    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:08.956810    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:08.957342    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:09.172828    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:09.172828    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:09.172828    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:09.172828    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:09.176404    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:09.176404    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:09.177409    8616 round_trippers.go:580]     Audit-Id: e2e12924-7fbd-4c9f-8912-45b6a4047274
	I0229 02:33:09.177409    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:09.177409    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:09.177409    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:09.177441    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:09.177441    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:09 GMT
	I0229 02:33:09.178250    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1320","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:33:09.356713    8616 request.go:629] Waited for 177.5519ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:09.356806    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:09.356929    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:09.356929    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:09.356929    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:09.360456    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:09.360456    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:09.360456    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:09 GMT
	I0229 02:33:09.360456    8616 round_trippers.go:580]     Audit-Id: 5de96f36-0e2e-493b-a717-65acd6ae3f1f
	I0229 02:33:09.360456    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:09.360456    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:09.361400    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:09.361400    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:09.361606    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:09.683176    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:09.683176    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:09.683285    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:09.683285    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:09.686711    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:09.686711    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:09.686711    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:09.686711    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:09 GMT
	I0229 02:33:09.686711    8616 round_trippers.go:580]     Audit-Id: c48e3404-2915-49b5-a3dc-a4daf5ab8e77
	I0229 02:33:09.686711    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:09.686711    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:09.686711    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:09.687491    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1320","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:33:09.744664    8616 request.go:629] Waited for 56.0085ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:09.744947    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:09.744947    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:09.745035    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:09.745035    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:09.751320    8616 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:33:09.751320    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:09.751320    8616 round_trippers.go:580]     Audit-Id: 5fa27a10-6bca-4a48-99a5-52a67e93fa27
	I0229 02:33:09.751320    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:09.751320    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:09.751320    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:09.751320    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:09.751320    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:09 GMT
	I0229 02:33:09.751320    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:10.182307    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:10.182498    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:10.182498    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:10.182498    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:10.187680    8616 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:33:10.187785    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:10.187785    8616 round_trippers.go:580]     Audit-Id: ad7c29a6-a2db-4022-af46-b28e44ca85ac
	I0229 02:33:10.187785    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:10.187785    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:10.187785    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:10.187785    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:10.187785    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:10 GMT
	I0229 02:33:10.188875    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1320","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:33:10.189729    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:10.189786    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:10.189786    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:10.189786    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:10.192526    8616 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:33:10.192526    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:10.192526    8616 round_trippers.go:580]     Audit-Id: 509c15c5-32f1-4d56-905e-a30fefd8d3d0
	I0229 02:33:10.192526    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:10.192526    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:10.192526    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:10.192526    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:10.192526    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:10 GMT
	I0229 02:33:10.193391    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:10.194035    8616 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:10.671182    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:10.671248    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:10.671248    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:10.671248    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:10.674494    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:10.674494    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:10.674494    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:10 GMT
	I0229 02:33:10.674494    8616 round_trippers.go:580]     Audit-Id: c6031b95-0102-4393-9191-98b6d9209009
	I0229 02:33:10.674494    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:10.674494    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:10.674494    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:10.674494    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:10.674494    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1320","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:33:10.675480    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:10.675480    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:10.675480    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:10.675480    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:10.679315    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:10.679315    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:10.679398    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:10.679398    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:10.679398    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:10 GMT
	I0229 02:33:10.679398    8616 round_trippers.go:580]     Audit-Id: 159d5087-2edc-4461-8860-6c01ecd5b649
	I0229 02:33:10.679489    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:10.679489    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:10.679911    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:11.181417    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:11.181487    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:11.181487    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:11.181487    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:11.189953    8616 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 02:33:11.189953    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:11.189953    8616 round_trippers.go:580]     Audit-Id: 1e1933e0-ead8-463b-8f93-9e741c968316
	I0229 02:33:11.189953    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:11.189953    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:11.189953    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:11.189953    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:11.189953    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:11 GMT
	I0229 02:33:11.190919    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1320","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:33:11.191907    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:11.191907    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:11.191907    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:11.191907    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:11.199935    8616 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 02:33:11.199935    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:11.199935    8616 round_trippers.go:580]     Audit-Id: ab0e2b1e-9858-414d-806f-b31ec7d7442c
	I0229 02:33:11.199935    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:11.199935    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:11.199935    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:11.199935    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:11.199935    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:11 GMT
	I0229 02:33:11.199935    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:11.683368    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:11.683931    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:11.683931    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:11.683931    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:11.688106    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:11.688194    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:11.688194    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:11.688194    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:11 GMT
	I0229 02:33:11.688194    8616 round_trippers.go:580]     Audit-Id: d521080b-f119-4bd5-9f6f-03ef52ea8550
	I0229 02:33:11.688194    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:11.688194    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:11.688286    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:11.688437    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1320","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:33:11.689316    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:11.689393    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:11.689393    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:11.689393    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:11.692578    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:11.693407    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:11.693465    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:11.693465    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:11.693465    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:11.693465    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:11.693465    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:11 GMT
	I0229 02:33:11.693465    8616 round_trippers.go:580]     Audit-Id: 9416e26d-cb3d-4b09-b0a7-0d0e61e0e3b4
	I0229 02:33:11.693773    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:12.183670    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:12.183670    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:12.183670    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:12.183670    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:12.187269    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:12.187269    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:12.188110    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:12 GMT
	I0229 02:33:12.188110    8616 round_trippers.go:580]     Audit-Id: cbb55f54-4785-41e4-bc0d-90eee282c4ff
	I0229 02:33:12.188110    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:12.188110    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:12.188110    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:12.188110    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:12.188308    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:12.189119    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:12.189119    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:12.189119    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:12.189219    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:12.192855    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:12.192855    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:12.192855    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:12.192855    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:12.192855    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:12.192855    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:12 GMT
	I0229 02:33:12.192855    8616 round_trippers.go:580]     Audit-Id: e90f9509-db87-411e-8e43-78f30ee27c02
	I0229 02:33:12.192855    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:12.192855    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:12.670381    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:12.670381    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:12.670381    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:12.670381    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:12.674996    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:12.675210    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:12.675210    8616 round_trippers.go:580]     Audit-Id: 250653f3-c221-475a-befc-f5f144b2d051
	I0229 02:33:12.675210    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:12.675210    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:12.675210    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:12.675210    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:12.675210    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:12 GMT
	I0229 02:33:12.676062    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:12.677318    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:12.677396    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:12.677396    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:12.677396    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:12.681733    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:12.682398    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:12.682398    8616 round_trippers.go:580]     Audit-Id: ecb2e61a-1ad1-473e-8707-5958415bdec8
	I0229 02:33:12.682398    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:12.682468    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:12.682468    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:12.682468    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:12.682555    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:12 GMT
	I0229 02:33:12.682993    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:12.683468    8616 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:13.170126    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:13.170126    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:13.170383    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:13.170383    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:13.175864    8616 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:33:13.175864    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:13.175864    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:13.175864    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:13.175864    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:13 GMT
	I0229 02:33:13.175864    8616 round_trippers.go:580]     Audit-Id: 239dab67-8ee5-46c1-a334-5beafb30d63d
	I0229 02:33:13.175864    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:13.175864    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:13.175864    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:13.177279    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:13.177279    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:13.177279    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:13.177279    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:13.181057    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:13.181057    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:13.181437    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:13.181437    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:13.181437    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:13 GMT
	I0229 02:33:13.181437    8616 round_trippers.go:580]     Audit-Id: 5a2904ba-c830-482f-af5d-2931a7dd6684
	I0229 02:33:13.181437    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:13.181437    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:13.181722    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:13.672335    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:13.672444    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:13.672444    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:13.672444    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:13.676882    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:13.677370    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:13.677370    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:13.677370    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:13.677370    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:13 GMT
	I0229 02:33:13.677370    8616 round_trippers.go:580]     Audit-Id: 56d51b1b-ad9d-4920-87b7-e5402bbfc121
	I0229 02:33:13.677370    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:13.677370    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:13.677652    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:13.678624    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:13.678692    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:13.678692    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:13.678692    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:13.681774    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:13.681774    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:13.681774    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:13.682657    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:13.682657    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:13 GMT
	I0229 02:33:13.682657    8616 round_trippers.go:580]     Audit-Id: 76b80d7e-7004-4cfb-8c67-2925f6ae3e6b
	I0229 02:33:13.682657    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:13.682657    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:13.683066    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:14.174303    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:14.174414    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:14.174414    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:14.174414    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:14.178601    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:14.178601    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:14.178601    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:14.178601    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:14.178601    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:14.179594    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:14.179637    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:14 GMT
	I0229 02:33:14.179637    8616 round_trippers.go:580]     Audit-Id: b3e73557-fc46-4ef0-8787-03da941d90c5
	I0229 02:33:14.179972    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:14.181016    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:14.181016    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:14.181016    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:14.181115    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:14.184358    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:14.184358    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:14.184358    8616 round_trippers.go:580]     Audit-Id: 595811ca-0834-44ee-818a-83dad6803542
	I0229 02:33:14.184358    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:14.184358    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:14.184358    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:14.184358    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:14.184358    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:14 GMT
	I0229 02:33:14.185040    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:14.677388    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:14.677388    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:14.677885    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:14.677885    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:14.681379    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:14.681379    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:14.681379    8616 round_trippers.go:580]     Audit-Id: ca30708d-9b0e-4e45-ae64-54cc0e98caeb
	I0229 02:33:14.681379    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:14.681379    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:14.681379    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:14.681379    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:14.681379    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:14 GMT
	I0229 02:33:14.681379    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:14.682684    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:14.682684    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:14.682684    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:14.682684    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:14.685258    8616 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:33:14.685258    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:14.685258    8616 round_trippers.go:580]     Audit-Id: da8778d8-88a5-435f-83cd-4fb3a496b611
	I0229 02:33:14.685258    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:14.685258    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:14.685258    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:14.685258    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:14.685258    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:14 GMT
	I0229 02:33:14.686280    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:14.686280    8616 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:15.176521    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:15.176632    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:15.176632    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:15.176632    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:15.183800    8616 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:33:15.183800    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:15.183996    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:15.183996    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:15.183996    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:15.183996    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:15.183996    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:15 GMT
	I0229 02:33:15.183996    8616 round_trippers.go:580]     Audit-Id: 17062ca6-a6fb-4f40-9318-808b88a12320
	I0229 02:33:15.184293    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:15.185545    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:15.185674    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:15.185674    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:15.185742    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:15.205699    8616 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0229 02:33:15.205699    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:15.205699    8616 round_trippers.go:580]     Audit-Id: cb22d753-7091-4607-8ae6-7864c7353ac5
	I0229 02:33:15.205699    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:15.205699    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:15.205699    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:15.205699    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:15.205699    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:15 GMT
	I0229 02:33:15.206698    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:15.670396    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:15.670396    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:15.670491    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:15.670491    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:15.674544    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:15.674544    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:15.674544    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:15.674544    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:15.674544    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:15 GMT
	I0229 02:33:15.674544    8616 round_trippers.go:580]     Audit-Id: 05b710c2-1808-468a-a4c5-c90f30093c9e
	I0229 02:33:15.674544    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:15.674544    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:15.674848    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:15.675639    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:15.675639    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:15.675741    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:15.675741    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:15.679989    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:15.680763    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:15.680763    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:15.680763    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:15 GMT
	I0229 02:33:15.680763    8616 round_trippers.go:580]     Audit-Id: 6e147b94-a661-48c6-9ca4-9633334849b1
	I0229 02:33:15.680763    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:15.680763    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:15.680763    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:15.680964    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:16.170887    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:16.170975    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:16.170975    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:16.170975    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:16.175263    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:16.175263    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:16.175263    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:16.175263    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:16 GMT
	I0229 02:33:16.175263    8616 round_trippers.go:580]     Audit-Id: 4c709568-f35c-4de3-aaac-a16e15a28876
	I0229 02:33:16.175263    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:16.175263    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:16.175263    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:16.176150    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:16.176763    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:16.176763    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:16.176763    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:16.176763    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:16.182788    8616 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:33:16.182788    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:16.182788    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:16.182788    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:16.182788    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:16.182788    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:16 GMT
	I0229 02:33:16.182788    8616 round_trippers.go:580]     Audit-Id: e955ff82-7250-473b-8f14-b049af479539
	I0229 02:33:16.182788    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:16.183462    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:16.672104    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:16.672520    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:16.672582    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:16.672582    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:16.676786    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:16.676786    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:16.676786    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:16.677172    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:16.677172    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:16.677172    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:16 GMT
	I0229 02:33:16.677172    8616 round_trippers.go:580]     Audit-Id: c93c543c-de98-4ac2-a7ef-ef13b9836561
	I0229 02:33:16.677172    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:16.677542    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:16.677859    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:16.677859    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:16.677859    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:16.677859    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:16.681506    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:16.682117    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:16.682117    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:16.682117    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:16.682117    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:16 GMT
	I0229 02:33:16.682117    8616 round_trippers.go:580]     Audit-Id: e643d3e9-97bd-4f05-99d9-5fd72451d236
	I0229 02:33:16.682117    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:16.682117    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:16.682921    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:17.170016    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:17.170084    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:17.170084    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:17.170172    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:17.173548    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:17.173548    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:17.173548    8616 round_trippers.go:580]     Audit-Id: bf55b6c9-189d-4e41-a701-5933fd8c68e2
	I0229 02:33:17.173548    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:17.173548    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:17.173548    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:17.173548    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:17.173548    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:17 GMT
	I0229 02:33:17.174461    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:17.175131    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:17.175131    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:17.175131    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:17.175131    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:17.178452    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:17.178452    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:17.178452    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:17.178452    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:17 GMT
	I0229 02:33:17.178452    8616 round_trippers.go:580]     Audit-Id: fe5e20c8-1f09-40aa-ab9a-19dcc249be0a
	I0229 02:33:17.178452    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:17.178452    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:17.178452    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:17.179168    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:17.179643    8616 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:17.680569    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:17.680569    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:17.680682    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:17.680682    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:17.684134    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:17.684134    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:17.684134    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:17.684134    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:17 GMT
	I0229 02:33:17.684134    8616 round_trippers.go:580]     Audit-Id: 422080bb-1dba-4c43-bbba-075c75681768
	I0229 02:33:17.684134    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:17.684134    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:17.684134    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:17.684134    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:17.685132    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:17.685132    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:17.685132    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:17.685132    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:17.689217    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:17.689217    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:17.689217    8616 round_trippers.go:580]     Audit-Id: 5265bce3-a25c-46d1-ad6e-9f3817c1eeed
	I0229 02:33:17.689217    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:17.689217    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:17.689217    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:17.689217    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:17.689217    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:17 GMT
	I0229 02:33:17.689217    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:18.181424    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:18.181748    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:18.181748    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:18.181748    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:18.185912    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:18.185912    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:18.185912    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:18.185912    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:18.185912    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:18.185912    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:18 GMT
	I0229 02:33:18.185912    8616 round_trippers.go:580]     Audit-Id: 7cf86a6b-d541-44d8-9c63-c4b3e6e6f64f
	I0229 02:33:18.185912    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:18.186819    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:18.187486    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:18.187583    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:18.187583    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:18.187583    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:18.190723    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:18.191172    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:18.191172    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:18.191172    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:18.191172    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:18 GMT
	I0229 02:33:18.191172    8616 round_trippers.go:580]     Audit-Id: ebd655cf-ac4c-45c5-96e4-1a008b4e903f
	I0229 02:33:18.191172    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:18.191172    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:18.191645    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:18.681234    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:18.681234    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:18.681234    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:18.681234    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:18.684989    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:18.684989    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:18.684989    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:18.684989    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:18.684989    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:18.684989    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:18 GMT
	I0229 02:33:18.684989    8616 round_trippers.go:580]     Audit-Id: dd357638-0ae3-4f18-b8c1-3ba3a4cbd38a
	I0229 02:33:18.685978    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:18.686346    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:18.687486    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:18.687553    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:18.687623    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:18.687689    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:18.689995    8616 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:33:18.689995    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:18.689995    8616 round_trippers.go:580]     Audit-Id: 263dcc28-dbc3-418e-8b7d-f510ad7d54d8
	I0229 02:33:18.690982    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:18.690982    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:18.691004    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:18.691004    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:18.691004    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:18 GMT
	I0229 02:33:18.691298    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:19.180857    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:19.180857    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:19.180857    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:19.180857    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:19.184416    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:19.184416    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:19.184416    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:19.184416    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:19.185431    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:19.185431    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:19 GMT
	I0229 02:33:19.185431    8616 round_trippers.go:580]     Audit-Id: c6bfccc3-560f-4a9b-9618-eac76b3aa4c1
	I0229 02:33:19.185431    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:19.185492    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:19.186568    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:19.186568    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:19.186568    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:19.186568    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:19.189273    8616 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:33:19.189273    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:19.189273    8616 round_trippers.go:580]     Audit-Id: 8595d7eb-6a1a-4b91-9e03-f956110357bc
	I0229 02:33:19.189273    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:19.189273    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:19.189273    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:19.189273    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:19.189273    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:19 GMT
	I0229 02:33:19.191018    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:19.191466    8616 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:19.681575    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:19.681767    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:19.681767    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:19.681767    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:19.688364    8616 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:33:19.688364    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:19.688364    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:19.688364    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:19.688364    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:19.688364    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:19 GMT
	I0229 02:33:19.688364    8616 round_trippers.go:580]     Audit-Id: f7b1a94b-705d-4ba7-a158-64eb404bcefd
	I0229 02:33:19.688364    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:19.688364    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:19.690072    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:19.690072    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:19.690133    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:19.690133    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:19.692946    8616 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:33:19.692946    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:19.692946    8616 round_trippers.go:580]     Audit-Id: 648ef05d-e528-4030-b57d-8878151c497b
	I0229 02:33:19.692946    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:19.692946    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:19.692946    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:19.692946    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:19.692946    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:19 GMT
	I0229 02:33:19.694484    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:20.181668    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:20.181743    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:20.181743    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:20.181743    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:20.186313    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:20.186313    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:20.186313    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:20.186313    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:20.186313    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:20.186313    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:20 GMT
	I0229 02:33:20.186313    8616 round_trippers.go:580]     Audit-Id: 9a91a559-4fdd-45e5-a43a-60c09d103290
	I0229 02:33:20.186313    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:20.186933    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:20.187589    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:20.187678    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:20.187678    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:20.187678    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:20.191070    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:20.191070    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:20.191070    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:20.191070    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:20.191070    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:20.191070    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:20 GMT
	I0229 02:33:20.191070    8616 round_trippers.go:580]     Audit-Id: a894c43a-09aa-4810-9022-d3d16c42b2bb
	I0229 02:33:20.191070    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:20.192323    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:20.680411    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:20.680498    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:20.680498    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:20.680498    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:20.684855    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:20.684855    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:20.684855    8616 round_trippers.go:580]     Audit-Id: f0731aa2-f681-4095-9df9-31ae0296f5f1
	I0229 02:33:20.684855    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:20.684855    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:20.684855    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:20.684855    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:20.684855    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:20 GMT
	I0229 02:33:20.685787    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:20.686665    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:20.686665    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:20.686665    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:20.686735    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:20.690092    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:20.690092    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:20.690092    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:20.690092    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:20.690092    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:20.690092    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:20.690092    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:20 GMT
	I0229 02:33:20.690787    8616 round_trippers.go:580]     Audit-Id: e6a55ab3-bfcb-4d8f-b109-6df3b8fb1430
	I0229 02:33:20.691055    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:21.182829    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:21.183057    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:21.183057    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:21.183057    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:21.190242    8616 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:33:21.191186    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:21.191186    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:21.191186    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:21.191186    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:21.191186    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:21.191186    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:21 GMT
	I0229 02:33:21.191186    8616 round_trippers.go:580]     Audit-Id: 60e0aab7-0fed-4979-81a3-ccb473a24a5b
	I0229 02:33:21.191186    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:21.192242    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:21.192242    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:21.192242    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:21.192242    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:21.195851    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:21.195851    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:21.195851    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:21.195851    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:21.195851    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:21.195851    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:21 GMT
	I0229 02:33:21.195851    8616 round_trippers.go:580]     Audit-Id: 088f214b-b488-45d9-92a0-e5b9f9c5e51e
	I0229 02:33:21.195851    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:21.196679    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:21.197116    8616 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:21.683553    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:21.684994    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:21.685129    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:21.685201    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:21.692076    8616 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:33:21.692076    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:21.692076    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:21.692076    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:21.692076    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:21.692076    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:21.692076    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:21 GMT
	I0229 02:33:21.692076    8616 round_trippers.go:580]     Audit-Id: 2dd6bb16-764a-4f97-a9e2-7ae423dd328d
	I0229 02:33:21.692076    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:21.692758    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:21.692758    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:21.692758    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:21.692758    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:21.696632    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:21.697086    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:21.697086    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:21.697086    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:21.697086    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:21 GMT
	I0229 02:33:21.697086    8616 round_trippers.go:580]     Audit-Id: e4f62f57-edd0-403f-99ca-ef43617ae21e
	I0229 02:33:21.697086    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:21.697086    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:21.697213    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:22.169618    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:22.169618    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:22.169618    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:22.169618    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:22.176117    8616 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:33:22.176117    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:22.176117    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:22 GMT
	I0229 02:33:22.176117    8616 round_trippers.go:580]     Audit-Id: 2828c570-3834-44b8-8a8c-985c871654fc
	I0229 02:33:22.176117    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:22.176117    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:22.176117    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:22.176117    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:22.176341    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:22.177093    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:22.177093    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:22.177093    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:22.177093    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:22.180477    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:22.180691    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:22.180691    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:22 GMT
	I0229 02:33:22.180758    8616 round_trippers.go:580]     Audit-Id: 638cfcba-eaee-4990-ba6b-53882456d2d5
	I0229 02:33:22.180758    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:22.180758    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:22.180758    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:22.180758    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:22.181027    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:22.670065    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:22.670065    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:22.670131    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:22.670131    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:22.676021    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:22.676096    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:22.676096    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:22.676096    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:22.676685    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:22 GMT
	I0229 02:33:22.677545    8616 round_trippers.go:580]     Audit-Id: 0968c15e-d99a-4d9c-ae4b-6fe7f74c33bb
	I0229 02:33:22.677624    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:22.677624    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:22.678006    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:22.678654    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:22.678654    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:22.678654    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:22.678654    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:22.686317    8616 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:33:22.686317    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:22.686317    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:22.686317    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:22.686317    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:22 GMT
	I0229 02:33:22.686317    8616 round_trippers.go:580]     Audit-Id: a2e54b23-e1c9-4b3f-bec2-e238a314004d
	I0229 02:33:22.686317    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:22.686317    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:22.687290    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:23.170139    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:23.170244    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:23.170244    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:23.170244    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:23.173283    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:23.173283    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:23.173283    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:23.173283    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:23.173283    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:23 GMT
	I0229 02:33:23.173283    8616 round_trippers.go:580]     Audit-Id: 8fd37297-2aa8-485e-8320-64ab2ef25ff1
	I0229 02:33:23.173283    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:23.173283    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:23.174278    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:23.175197    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:23.175321    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:23.175321    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:23.175321    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:23.178021    8616 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:33:23.178833    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:23.178833    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:23.178833    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:23.178833    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:23 GMT
	I0229 02:33:23.178833    8616 round_trippers.go:580]     Audit-Id: b56bf877-58bd-44ef-a6c1-9fd3a4df6066
	I0229 02:33:23.178833    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:23.178833    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:23.179028    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:23.672752    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:23.672752    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:23.672868    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:23.672868    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:23.681458    8616 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 02:33:23.681458    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:23.681458    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:23.682000    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:23.682000    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:23.682000    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:23 GMT
	I0229 02:33:23.682000    8616 round_trippers.go:580]     Audit-Id: 4317ba35-8011-48cd-aebc-1b37af12fec5
	I0229 02:33:23.682000    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:23.682212    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:23.683059    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:23.683059    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:23.683059    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:23.683116    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:23.687174    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:23.687174    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:23.687174    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:23.688054    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:23.688054    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:23.688054    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:23.688054    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:23 GMT
	I0229 02:33:23.688054    8616 round_trippers.go:580]     Audit-Id: 7f03e89e-99b0-4676-b3d7-362194d6d48e
	I0229 02:33:23.688345    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:23.688974    8616 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:24.170133    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:24.170283    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:24.170283    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:24.170283    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:24.175482    8616 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:33:24.175547    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:24.175547    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:24.175547    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:24.175547    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:24.175547    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:24.175547    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:24 GMT
	I0229 02:33:24.175547    8616 round_trippers.go:580]     Audit-Id: f7525f90-0aed-4f16-a858-adb91d99aa39
	I0229 02:33:24.175547    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:24.176567    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:24.176637    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:24.176637    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:24.176637    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:24.179900    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:24.179900    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:24.180862    8616 round_trippers.go:580]     Audit-Id: c2252ebd-5035-4804-956a-d41e14d3d5c9
	I0229 02:33:24.180862    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:24.180862    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:24.180862    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:24.180862    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:24.180862    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:24 GMT
	I0229 02:33:24.181401    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:24.671851    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:24.671921    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:24.671921    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:24.671921    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:24.675726    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:24.676135    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:24.676135    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:24.676135    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:24.676135    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:24.676135    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:24.676135    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:24 GMT
	I0229 02:33:24.676135    8616 round_trippers.go:580]     Audit-Id: f543a381-febd-4a95-9d4c-971e9961a0b8
	I0229 02:33:24.676627    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:24.677344    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:24.677344    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:24.677344    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:24.677344    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:24.681020    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:24.681097    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:24.681168    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:24.681168    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:24.681168    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:24.681168    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:24.681168    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:24 GMT
	I0229 02:33:24.681168    8616 round_trippers.go:580]     Audit-Id: 6d8ad7e7-539c-4705-a3bc-1751e1d5ca49
	I0229 02:33:24.681453    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:25.173437    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:25.173718    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:25.173718    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:25.173718    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:25.177838    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:25.177838    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:25.178833    8616 round_trippers.go:580]     Audit-Id: 510c8b0b-db93-484a-80ff-1fc943c7f93d
	I0229 02:33:25.178833    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:25.178833    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:25.178833    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:25.178833    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:25.178833    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:25 GMT
	I0229 02:33:25.179303    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:25.180012    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:25.180012    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:25.180012    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:25.180012    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:25.183181    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:25.183181    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:25.183181    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:25.183181    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:25.183181    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:25.183181    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:25.183181    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:25 GMT
	I0229 02:33:25.183181    8616 round_trippers.go:580]     Audit-Id: 5c82e626-e70e-4cff-b107-d8c331f6865c
	I0229 02:33:25.183181    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:25.678096    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:25.678261    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:25.678261    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:25.678261    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:25.685294    8616 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:33:25.685294    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:25.685294    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:25.685294    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:25.685294    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:25.685294    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:25.685294    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:25 GMT
	I0229 02:33:25.685294    8616 round_trippers.go:580]     Audit-Id: 27650986-adf3-42ed-b7c1-5f9bdf8397e7
	I0229 02:33:25.685966    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:25.686995    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:25.687023    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:25.687023    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:25.687023    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:25.689437    8616 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:33:25.689437    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:25.689437    8616 round_trippers.go:580]     Audit-Id: b5051581-2b99-4a09-bbfe-ab8560ad4fe3
	I0229 02:33:25.689437    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:25.690348    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:25.690348    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:25.690348    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:25.690348    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:25 GMT
	I0229 02:33:25.690619    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:25.690943    8616 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:26.178558    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:26.178645    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:26.178645    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:26.178645    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:26.182240    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:26.183241    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:26.183260    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:26 GMT
	I0229 02:33:26.183260    8616 round_trippers.go:580]     Audit-Id: 766ba9ce-cf83-46a9-8e8c-c6ce0a927ecf
	I0229 02:33:26.183260    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:26.183260    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:26.183260    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:26.183260    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:26.183375    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:26.184236    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:26.184236    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:26.184236    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:26.184236    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:26.189682    8616 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:33:26.189682    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:26.189682    8616 round_trippers.go:580]     Audit-Id: 21405aa9-6ef8-4a59-8961-09b5eba1d3d4
	I0229 02:33:26.189682    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:26.189682    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:26.189682    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:26.189682    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:26.189682    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:26 GMT
	I0229 02:33:26.189682    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:26.680024    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:26.680884    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:26.680884    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:26.680884    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:26.688834    8616 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:33:26.688834    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:26.688834    8616 round_trippers.go:580]     Audit-Id: 90ebced1-0e9f-4ec1-8795-b28140161363
	I0229 02:33:26.688834    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:26.688834    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:26.688834    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:26.688834    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:26.688834    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:26 GMT
	I0229 02:33:26.688834    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:26.690196    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:26.690196    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:26.690196    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:26.690196    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:26.694785    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:26.694785    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:26.694785    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:26.694785    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:26 GMT
	I0229 02:33:26.694785    8616 round_trippers.go:580]     Audit-Id: 6799dd0c-4e3e-4c36-9bbd-7f07a87d57ef
	I0229 02:33:26.694785    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:26.694785    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:26.694785    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:26.695095    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:27.179722    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:27.179802    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:27.179802    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:27.179802    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:27.183995    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:27.184330    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:27.184330    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:27.184330    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:27.184330    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:27.184330    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:27.184330    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:27 GMT
	I0229 02:33:27.184330    8616 round_trippers.go:580]     Audit-Id: 63bedb85-ae4f-4978-b14a-445a47e34491
	I0229 02:33:27.184757    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:27.185471    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:27.185581    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:27.185581    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:27.185581    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:27.190911    8616 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:33:27.190911    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:27.190911    8616 round_trippers.go:580]     Audit-Id: 6515358a-f9cb-4c97-be85-7d04bf05798d
	I0229 02:33:27.190911    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:27.190911    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:27.190911    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:27.190911    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:27.190911    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:27 GMT
	I0229 02:33:27.191539    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:27.680749    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:27.680749    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:27.680749    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:27.680749    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:27.685537    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:27.685537    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:27.685537    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:27.685537    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:27.685537    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:27 GMT
	I0229 02:33:27.685537    8616 round_trippers.go:580]     Audit-Id: ac102e37-3448-4d64-8826-4d9a3f07b997
	I0229 02:33:27.685537    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:27.685537    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:27.686066    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:27.687134    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:27.687220    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:27.687220    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:27.687220    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:27.690427    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:27.690650    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:27.690650    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:27.690650    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:27.690650    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:27.690650    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:27 GMT
	I0229 02:33:27.690727    8616 round_trippers.go:580]     Audit-Id: f2b99a0f-a8f9-4235-9423-f9d95518c84e
	I0229 02:33:27.690727    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:27.690904    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:27.691431    8616 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:28.182414    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:28.182628    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:28.182628    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:28.182628    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:28.185686    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:28.185686    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:28.185686    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:28.185686    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:28.185686    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:28 GMT
	I0229 02:33:28.185686    8616 round_trippers.go:580]     Audit-Id: 79ddd0f9-7dd1-4903-839c-c1597567d663
	I0229 02:33:28.185686    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:28.185686    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:28.186681    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:28.186681    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:28.186681    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:28.186681    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:28.186681    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:28.190214    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:28.190214    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:28.190214    8616 round_trippers.go:580]     Audit-Id: 8fa0262f-53f4-49db-97b3-382499e5d6a6
	I0229 02:33:28.190214    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:28.190214    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:28.190214    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:28.190214    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:28.190214    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:28 GMT
	I0229 02:33:28.190917    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:28.671676    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:28.671777    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:28.671777    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:28.671777    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:28.675708    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:28.676309    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:28.676309    8616 round_trippers.go:580]     Audit-Id: dec9875e-b3ec-4b17-8c8a-e4baa0dca6aa
	I0229 02:33:28.676309    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:28.676309    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:28.676309    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:28.676309    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:28.676448    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:28 GMT
	I0229 02:33:28.676797    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:28.677834    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:28.677920    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:28.677920    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:28.677920    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:28.691542    8616 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0229 02:33:28.691542    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:28.692542    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:28.692542    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:28.692542    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:28.692542    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:28 GMT
	I0229 02:33:28.692542    8616 round_trippers.go:580]     Audit-Id: 0a886436-f1ba-4d8d-99e8-9dc1f7679f51
	I0229 02:33:28.692542    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:28.693039    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:29.170167    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:29.170167    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:29.170240    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:29.170240    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:29.173934    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:29.173934    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:29.173934    8616 round_trippers.go:580]     Audit-Id: e29f7252-5a1b-47a7-8811-54dc9dbe3b4e
	I0229 02:33:29.173934    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:29.173934    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:29.173934    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:29.173934    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:29.173934    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:29 GMT
	I0229 02:33:29.174946    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:29.175793    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:29.175793    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:29.175851    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:29.175851    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:29.180930    8616 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:33:29.180930    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:29.180930    8616 round_trippers.go:580]     Audit-Id: 99976aa6-6621-4d65-b03b-84d1e85918d1
	I0229 02:33:29.180930    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:29.180930    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:29.180930    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:29.180930    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:29.180930    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:29 GMT
	I0229 02:33:29.180930    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:29.685028    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:29.685128    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:29.685128    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:29.685128    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:29.689653    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:29.689762    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:29.689762    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:29.689762    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:29 GMT
	I0229 02:33:29.689762    8616 round_trippers.go:580]     Audit-Id: c09659d4-344d-4b9d-8225-2eb38eb40791
	I0229 02:33:29.689762    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:29.689762    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:29.689762    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:29.689967    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:29.690707    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:29.690707    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:29.690771    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:29.690771    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:29.694502    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:29.694502    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:29.694502    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:29.694502    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:29.694502    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:29.694502    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:29.694502    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:29 GMT
	I0229 02:33:29.694502    8616 round_trippers.go:580]     Audit-Id: f87459e1-624c-4c47-865b-f7acb83e1d40
	I0229 02:33:29.694502    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:29.695221    8616 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:30.185674    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:30.185674    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:30.185674    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:30.185674    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:30.191580    8616 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:33:30.191580    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:30.191580    8616 round_trippers.go:580]     Audit-Id: f7a3286e-0971-4b91-9aec-47354774a63f
	I0229 02:33:30.191580    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:30.191580    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:30.191580    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:30.191580    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:30.191580    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:30 GMT
	I0229 02:33:30.192265    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1412","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:33:30.193028    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:30.193028    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:30.193103    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:30.193103    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:30.198243    8616 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:33:30.198243    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:30.198243    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:30.198243    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:30.198243    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:30.198243    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:30 GMT
	I0229 02:33:30.198243    8616 round_trippers.go:580]     Audit-Id: 9156bbba-e52c-445a-856c-a40181a807cc
	I0229 02:33:30.198243    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:30.199007    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:30.683447    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:33:30.683822    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:30.683822    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:30.683822    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:30.690328    8616 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:33:30.690328    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:30.690328    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:30 GMT
	I0229 02:33:30.690328    8616 round_trippers.go:580]     Audit-Id: a40b347c-75f7-4fc8-a25b-d6c3b5bc12f9
	I0229 02:33:30.690328    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:30.690328    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:30.690328    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:30.690328    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:30.690328    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6492 chars]
	I0229 02:33:30.691236    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:30.691236    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:30.691236    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:30.691236    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:30.694950    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:30.694950    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:30.694950    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:30.694950    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:30.694950    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:30.694950    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:30 GMT
	I0229 02:33:30.694950    8616 round_trippers.go:580]     Audit-Id: f8a16a17-053e-4e5f-a027-b7d794cd6d33
	I0229 02:33:30.694950    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:30.694950    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:30.695597    8616 pod_ready.go:92] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"True"
	I0229 02:33:30.695597    8616 pod_ready.go:81] duration metric: took 22.5258783s waiting for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:30.695597    8616 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:30.695597    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-314500
	I0229 02:33:30.695597    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:30.695597    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:30.695597    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:30.699241    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:30.699292    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:30.699326    8616 round_trippers.go:580]     Audit-Id: 7ff37d70-37cf-4f1a-9f47-ad08306f8828
	I0229 02:33:30.699326    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:30.699326    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:30.699326    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:30.699326    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:30.699326    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:30 GMT
	I0229 02:33:30.699576    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-314500","namespace":"kube-system","uid":"b4f5f225-c7b2-4d26-a0ad-f09b2045ea14","resourceVersion":"1409","creationTimestamp":"2024-02-29T02:33:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.2.252:2379","kubernetes.io/config.hash":"b583592d76a92080553678603be807ce","kubernetes.io/config.mirror":"b583592d76a92080553678603be807ce","kubernetes.io/config.seen":"2024-02-29T02:32:57.667230131Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:33:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5853 chars]
	I0229 02:33:30.700035    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:30.700035    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:30.700035    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:30.700035    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:30.703546    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:30.703679    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:30.703679    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:30.703679    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:30.703679    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:30.703679    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:30.703679    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:30 GMT
	I0229 02:33:30.703760    8616 round_trippers.go:580]     Audit-Id: df029a45-fd6f-4122-be04-47150835818a
	I0229 02:33:30.703917    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:30.704279    8616 pod_ready.go:92] pod "etcd-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:33:30.704279    8616 pod_ready.go:81] duration metric: took 8.6816ms waiting for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:30.704279    8616 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:30.704279    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-314500
	I0229 02:33:30.704279    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:30.704526    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:30.704526    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:30.707598    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:30.707598    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:30.707598    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:30.707598    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:30 GMT
	I0229 02:33:30.707598    8616 round_trippers.go:580]     Audit-Id: 7baf798f-ab34-47b0-8d84-6b964c7e7439
	I0229 02:33:30.707598    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:30.707598    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:30.707598    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:30.707598    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-314500","namespace":"kube-system","uid":"d64133c2-8b75-4b12-b270-cbd060c1374e","resourceVersion":"1408","creationTimestamp":"2024-02-29T02:33:04Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.2.252:8443","kubernetes.io/config.hash":"462233dfd1884b55b9575973e0f20340","kubernetes.io/config.mirror":"462233dfd1884b55b9575973e0f20340","kubernetes.io/config.seen":"2024-02-29T02:32:57.667231431Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:33:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7391 chars]
	I0229 02:33:30.708534    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:30.708534    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:30.708534    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:30.708534    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:30.711365    8616 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:33:30.711365    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:30.711365    8616 round_trippers.go:580]     Audit-Id: 4e364db5-a0ad-4d3f-b353-5e4ae2f6c27a
	I0229 02:33:30.711365    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:30.711365    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:30.712378    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:30.712378    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:30.712378    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:30 GMT
	I0229 02:33:30.712518    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:30.712518    8616 pod_ready.go:92] pod "kube-apiserver-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:33:30.712518    8616 pod_ready.go:81] duration metric: took 8.2389ms waiting for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:30.712518    8616 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:30.713096    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-314500
	I0229 02:33:30.713096    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:30.713096    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:30.713096    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:30.719618    8616 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:33:30.719682    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:30.719682    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:30.719682    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:30 GMT
	I0229 02:33:30.719682    8616 round_trippers.go:580]     Audit-Id: fbe011f5-e4ce-41da-bf20-0995ac6408de
	I0229 02:33:30.719682    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:30.719682    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:30.719682    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:30.719682    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-314500","namespace":"kube-system","uid":"58e57902-e113-44a9-b5b5-4aba2ba13491","resourceVersion":"1426","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.mirror":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.seen":"2024-02-29T02:15:52.221398986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7171 chars]
	I0229 02:33:30.720368    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:30.720368    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:30.720368    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:30.720368    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:30.723911    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:30.723911    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:30.723911    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:30.723911    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:30.723911    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:30.723911    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:30.723911    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:30 GMT
	I0229 02:33:30.723911    8616 round_trippers.go:580]     Audit-Id: 4267bf1b-e056-4dc9-ad12-55a2a8f102eb
	I0229 02:33:30.723911    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:30.723911    8616 pod_ready.go:92] pod "kube-controller-manager-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:33:30.723911    8616 pod_ready.go:81] duration metric: took 11.3919ms waiting for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:30.723911    8616 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:30.723911    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gbrl
	I0229 02:33:30.723911    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:30.723911    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:30.723911    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:30.726913    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:30.726913    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:30.726913    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:30.726913    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:30.726913    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:30 GMT
	I0229 02:33:30.726913    8616 round_trippers.go:580]     Audit-Id: 206b851e-1bf1-4ddd-968d-10a82badfb72
	I0229 02:33:30.726913    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:30.726913    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:30.727914    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4gbrl","generateName":"kube-proxy-","namespace":"kube-system","uid":"accb56cb-79ee-4f16-b05e-91bf554c4a60","resourceVersion":"606","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0229 02:33:30.727914    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:33:30.727914    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:30.727914    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:30.727914    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:30.731561    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:30.731598    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:30.731598    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:30 GMT
	I0229 02:33:30.731598    8616 round_trippers.go:580]     Audit-Id: ac9ebde2-40ad-450d-b0ce-b508a4c94f72
	I0229 02:33:30.731598    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:30.731598    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:30.731598    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:30.731598    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:30.731598    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d","resourceVersion":"1213","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_28_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager": [truncated 3818 chars]
	I0229 02:33:30.732153    8616 pod_ready.go:92] pod "kube-proxy-4gbrl" in "kube-system" namespace has status "Ready":"True"
	I0229 02:33:30.732153    8616 pod_ready.go:81] duration metric: took 8.242ms waiting for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:30.732153    8616 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:30.887613    8616 request.go:629] Waited for 155.4514ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:33:30.887613    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:33:30.887613    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:30.887613    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:30.887613    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:30.892731    8616 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:33:30.892731    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:30.892731    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:30.892731    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:31 GMT
	I0229 02:33:30.892822    8616 round_trippers.go:580]     Audit-Id: 3d405a59-c70c-4779-99f5-9078c7a12046
	I0229 02:33:30.892822    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:30.892822    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:30.892822    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:30.892822    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6r6j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b84b22d-3786-4f9e-a23a-c7cfc93bb671","resourceVersion":"1324","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5735 chars]
	I0229 02:33:31.089725    8616 request.go:629] Waited for 195.7541ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:31.089833    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:31.089833    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:31.089900    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:31.089900    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:31.094157    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:31.094157    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:31.094157    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:31.094157    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:31.094157    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:31.094157    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:31 GMT
	I0229 02:33:31.094157    8616 round_trippers.go:580]     Audit-Id: 8776e2a1-39e0-4e72-a7a1-13dd8d1b3d5d
	I0229 02:33:31.094157    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:31.094935    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:31.095413    8616 pod_ready.go:92] pod "kube-proxy-6r6j4" in "kube-system" namespace has status "Ready":"True"
	I0229 02:33:31.095413    8616 pod_ready.go:81] duration metric: took 363.2397ms waiting for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:31.095563    8616 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zvlt2" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:31.291735    8616 request.go:629] Waited for 196.0768ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvlt2
	I0229 02:33:31.292084    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvlt2
	I0229 02:33:31.292139    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:31.292139    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:31.292175    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:31.296558    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:31.296558    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:31.296558    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:31.296558    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:31 GMT
	I0229 02:33:31.296558    8616 round_trippers.go:580]     Audit-Id: 73b3b30d-08a6-40ad-9f34-17fbae17f483
	I0229 02:33:31.296558    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:31.296558    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:31.296558    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:31.300992    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zvlt2","generateName":"kube-proxy-","namespace":"kube-system","uid":"0f29dabe-dc06-4460-bf19-55470247dbcc","resourceVersion":"1230","creationTimestamp":"2024-02-29T02:28:50Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:28:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5534 chars]
	I0229 02:33:31.493974    8616 request.go:629] Waited for 192.5152ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:33:31.494312    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:33:31.494529    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:31.494529    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:31.494529    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:31.502481    8616 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:33:31.502481    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:31.502481    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:31.502481    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:31 GMT
	I0229 02:33:31.502481    8616 round_trippers.go:580]     Audit-Id: a68e3af2-1892-4e9f-877a-792690b6d17b
	I0229 02:33:31.502481    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:31.502481    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:31.502481    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:31.502481    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m03","uid":"e3855f89-f53a-45b3-8e99-79bb2f21bdb0","resourceVersion":"1265","creationTimestamp":"2024-02-29T02:28:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_28_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:28:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3648 chars]
	I0229 02:33:31.502481    8616 pod_ready.go:92] pod "kube-proxy-zvlt2" in "kube-system" namespace has status "Ready":"True"
	I0229 02:33:31.503455    8616 pod_ready.go:81] duration metric: took 407.8693ms waiting for pod "kube-proxy-zvlt2" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:31.503455    8616 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:31.698016    8616 request.go:629] Waited for 194.4465ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:33:31.698016    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:33:31.698016    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:31.698016    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:31.698016    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:31.701538    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:31.701538    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:31.701538    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:31.702536    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:31.702536    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:31 GMT
	I0229 02:33:31.702536    8616 round_trippers.go:580]     Audit-Id: 81ba3f45-d7dd-426d-8344-f44b1fc13168
	I0229 02:33:31.702536    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:31.702536    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:31.702596    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-314500","namespace":"kube-system","uid":"31fcecc6-17de-43a6-892d-37cd915de64b","resourceVersion":"1428","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.mirror":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.seen":"2024-02-29T02:15:52.221399886Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4901 chars]
	I0229 02:33:31.885617    8616 request.go:629] Waited for 182.442ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:31.885617    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:33:31.885926    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:31.885926    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:31.885926    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:31.893319    8616 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:33:31.893319    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:31.893319    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:32 GMT
	I0229 02:33:31.893319    8616 round_trippers.go:580]     Audit-Id: 94f48d41-7a8d-4329-abca-bd485747ee7d
	I0229 02:33:31.893319    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:31.893319    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:31.893319    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:31.893319    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:31.893319    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:33:31.894345    8616 pod_ready.go:92] pod "kube-scheduler-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:33:31.894345    8616 pod_ready.go:81] duration metric: took 390.8682ms waiting for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:33:31.894345    8616 pod_ready.go:38] duration metric: took 23.8865707s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:33:31.894345    8616 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:33:31.903479    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:31.928462    8616 command_runner.go:130] > 1889
	I0229 02:33:31.928612    8616 api_server.go:72] duration metric: took 24.1238705s to wait for apiserver process to appear ...
	I0229 02:33:31.928612    8616 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:33:31.928612    8616 api_server.go:253] Checking apiserver healthz at https://172.19.2.252:8443/healthz ...
	I0229 02:33:31.935614    8616 api_server.go:279] https://172.19.2.252:8443/healthz returned 200:
	ok
	I0229 02:33:31.936430    8616 round_trippers.go:463] GET https://172.19.2.252:8443/version
	I0229 02:33:31.936503    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:31.936526    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:31.936526    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:31.938235    8616 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 02:33:31.938235    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:31.938235    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:32 GMT
	I0229 02:33:31.938235    8616 round_trippers.go:580]     Audit-Id: 328c9125-0194-437e-ba83-4af0fe46e07e
	I0229 02:33:31.938235    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:31.938235    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:31.938235    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:31.938235    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:31.938235    8616 round_trippers.go:580]     Content-Length: 264
	I0229 02:33:31.938235    8616 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0229 02:33:31.938235    8616 api_server.go:141] control plane version: v1.28.4
	I0229 02:33:31.938235    8616 api_server.go:131] duration metric: took 9.6226ms to wait for apiserver health ...
	I0229 02:33:31.938235    8616 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:33:32.091327    8616 request.go:629] Waited for 152.8014ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods
	I0229 02:33:32.091481    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods
	I0229 02:33:32.091481    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:32.091481    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:32.091481    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:32.097546    8616 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:33:32.097546    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:32.097546    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:32.097546    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:32.097546    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:32.097546    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:32.097546    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:32 GMT
	I0229 02:33:32.097546    8616 round_trippers.go:580]     Audit-Id: 6cf627fc-abd3-4e66-b96a-62a04574edd5
	I0229 02:33:32.099769    8616 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1439"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82041 chars]
	I0229 02:33:32.105263    8616 system_pods.go:59] 12 kube-system pods found
	I0229 02:33:32.105263    8616 system_pods.go:61] "coredns-5dd5756b68-8g6tg" [ef7fb259-9f24-4645-9eff-2b16f6789e1b] Running
	I0229 02:33:32.105263    8616 system_pods.go:61] "etcd-multinode-314500" [b4f5f225-c7b2-4d26-a0ad-f09b2045ea14] Running
	I0229 02:33:32.105263    8616 system_pods.go:61] "kindnet-6r7b8" [402c3ac1-05a9-45f1-aa7d-c0fb8ced6c87] Running
	I0229 02:33:32.105263    8616 system_pods.go:61] "kindnet-7g9t8" [1bbebf1c-4e33-40cb-915e-6df5982dbf0c] Running
	I0229 02:33:32.105263    8616 system_pods.go:61] "kindnet-t9r77" [4620d417-744c-4049-82ab-79d1ee7f047c] Running
	I0229 02:33:32.105263    8616 system_pods.go:61] "kube-apiserver-multinode-314500" [d64133c2-8b75-4b12-b270-cbd060c1374e] Running
	I0229 02:33:32.105263    8616 system_pods.go:61] "kube-controller-manager-multinode-314500" [58e57902-e113-44a9-b5b5-4aba2ba13491] Running
	I0229 02:33:32.105263    8616 system_pods.go:61] "kube-proxy-4gbrl" [accb56cb-79ee-4f16-b05e-91bf554c4a60] Running
	I0229 02:33:32.105263    8616 system_pods.go:61] "kube-proxy-6r6j4" [2b84b22d-3786-4f9e-a23a-c7cfc93bb671] Running
	I0229 02:33:32.105263    8616 system_pods.go:61] "kube-proxy-zvlt2" [0f29dabe-dc06-4460-bf19-55470247dbcc] Running
	I0229 02:33:32.105263    8616 system_pods.go:61] "kube-scheduler-multinode-314500" [31fcecc6-17de-43a6-892d-37cd915de64b] Running
	I0229 02:33:32.105263    8616 system_pods.go:61] "storage-provisioner" [9780520b-8ff9-408a-ab6f-41b63790ccd1] Running
	I0229 02:33:32.105263    8616 system_pods.go:74] duration metric: took 167.0192ms to wait for pod list to return data ...
	I0229 02:33:32.105263    8616 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:33:32.291604    8616 request.go:629] Waited for 186.3306ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/default/serviceaccounts
	I0229 02:33:32.291907    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/default/serviceaccounts
	I0229 02:33:32.291907    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:32.291907    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:32.291907    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:32.296258    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:33:32.296258    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:32.296258    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:32 GMT
	I0229 02:33:32.296258    8616 round_trippers.go:580]     Audit-Id: 04bf67be-b059-46d9-aa1f-97c0f9b7c684
	I0229 02:33:32.296258    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:32.296258    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:32.296258    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:32.296258    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:32.296258    8616 round_trippers.go:580]     Content-Length: 262
	I0229 02:33:32.296258    8616 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1439"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a442432a-e4e1-4889-bfa8-e3967acc17f0","resourceVersion":"330","creationTimestamp":"2024-02-29T02:16:04Z"}}]}
	I0229 02:33:32.296258    8616 default_sa.go:45] found service account: "default"
	I0229 02:33:32.296258    8616 default_sa.go:55] duration metric: took 190.9838ms for default service account to be created ...
	I0229 02:33:32.296258    8616 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:33:32.493841    8616 request.go:629] Waited for 197.4987ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods
	I0229 02:33:32.493917    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods
	I0229 02:33:32.493917    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:32.493917    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:32.493917    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:32.500551    8616 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:33:32.500551    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:32.500805    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:32 GMT
	I0229 02:33:32.500805    8616 round_trippers.go:580]     Audit-Id: 72c36661-05e4-4c3d-820c-6564ec9b8f1e
	I0229 02:33:32.500805    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:32.500805    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:32.500805    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:32.500805    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:32.502310    8616 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1439"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82041 chars]
	I0229 02:33:32.506116    8616 system_pods.go:86] 12 kube-system pods found
	I0229 02:33:32.506149    8616 system_pods.go:89] "coredns-5dd5756b68-8g6tg" [ef7fb259-9f24-4645-9eff-2b16f6789e1b] Running
	I0229 02:33:32.506149    8616 system_pods.go:89] "etcd-multinode-314500" [b4f5f225-c7b2-4d26-a0ad-f09b2045ea14] Running
	I0229 02:33:32.506172    8616 system_pods.go:89] "kindnet-6r7b8" [402c3ac1-05a9-45f1-aa7d-c0fb8ced6c87] Running
	I0229 02:33:32.506172    8616 system_pods.go:89] "kindnet-7g9t8" [1bbebf1c-4e33-40cb-915e-6df5982dbf0c] Running
	I0229 02:33:32.506172    8616 system_pods.go:89] "kindnet-t9r77" [4620d417-744c-4049-82ab-79d1ee7f047c] Running
	I0229 02:33:32.506172    8616 system_pods.go:89] "kube-apiserver-multinode-314500" [d64133c2-8b75-4b12-b270-cbd060c1374e] Running
	I0229 02:33:32.506172    8616 system_pods.go:89] "kube-controller-manager-multinode-314500" [58e57902-e113-44a9-b5b5-4aba2ba13491] Running
	I0229 02:33:32.506172    8616 system_pods.go:89] "kube-proxy-4gbrl" [accb56cb-79ee-4f16-b05e-91bf554c4a60] Running
	I0229 02:33:32.506172    8616 system_pods.go:89] "kube-proxy-6r6j4" [2b84b22d-3786-4f9e-a23a-c7cfc93bb671] Running
	I0229 02:33:32.506172    8616 system_pods.go:89] "kube-proxy-zvlt2" [0f29dabe-dc06-4460-bf19-55470247dbcc] Running
	I0229 02:33:32.506172    8616 system_pods.go:89] "kube-scheduler-multinode-314500" [31fcecc6-17de-43a6-892d-37cd915de64b] Running
	I0229 02:33:32.506172    8616 system_pods.go:89] "storage-provisioner" [9780520b-8ff9-408a-ab6f-41b63790ccd1] Running
	I0229 02:33:32.506172    8616 system_pods.go:126] duration metric: took 209.9027ms to wait for k8s-apps to be running ...
	I0229 02:33:32.506172    8616 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:33:32.514501    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:33:32.541079    8616 system_svc.go:56] duration metric: took 34.8076ms WaitForService to wait for kubelet.
	I0229 02:33:32.541135    8616 kubeadm.go:581] duration metric: took 24.736304s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:33:32.541170    8616 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:33:32.698202    8616 request.go:629] Waited for 156.8239ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes
	I0229 02:33:32.698202    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes
	I0229 02:33:32.698202    8616 round_trippers.go:469] Request Headers:
	I0229 02:33:32.698202    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:33:32.698202    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:33:32.701719    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:33:32.702552    8616 round_trippers.go:577] Response Headers:
	I0229 02:33:32.702552    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:33:32.702552    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:33:32.702552    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:33:32.702552    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:33:32.702552    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:33:32 GMT
	I0229 02:33:32.702552    8616 round_trippers.go:580]     Audit-Id: ae6b1f33-41ea-4e67-841e-07c310425373
	I0229 02:33:32.702939    8616 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1439"},"items":[{"metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14740 chars]
	I0229 02:33:32.703224    8616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:33:32.703768    8616 node_conditions.go:123] node cpu capacity is 2
	I0229 02:33:32.703768    8616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:33:32.703768    8616 node_conditions.go:123] node cpu capacity is 2
	I0229 02:33:32.703768    8616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:33:32.703768    8616 node_conditions.go:123] node cpu capacity is 2
	I0229 02:33:32.703768    8616 node_conditions.go:105] duration metric: took 162.5883ms to run NodePressure ...
	I0229 02:33:32.703768    8616 start.go:228] waiting for startup goroutines ...
	I0229 02:33:32.703768    8616 start.go:233] waiting for cluster config update ...
	I0229 02:33:32.703768    8616 start.go:242] writing updated cluster config ...
	I0229 02:33:32.717928    8616 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:33:32.718166    8616 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:33:32.721055    8616 out.go:177] * Starting worker node multinode-314500-m02 in cluster multinode-314500
	I0229 02:33:32.721640    8616 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:33:32.721707    8616 cache.go:56] Caching tarball of preloaded images
	I0229 02:33:32.722161    8616 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:33:32.722319    8616 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 02:33:32.722493    8616 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:33:32.732008    8616 start.go:365] acquiring machines lock for multinode-314500-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:33:32.732973    8616 start.go:369] acquired machines lock for "multinode-314500-m02" in 965.2µs
	I0229 02:33:32.732973    8616 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:33:32.732973    8616 fix.go:54] fixHost starting: m02
	I0229 02:33:32.732973    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:33:34.750583    8616 main.go:141] libmachine: [stdout =====>] : Off
	
	I0229 02:33:34.750772    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:33:34.750772    8616 fix.go:102] recreateIfNeeded on multinode-314500-m02: state=Stopped err=<nil>
	W0229 02:33:34.750772    8616 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:33:34.752044    8616 out.go:177] * Restarting existing hyperv VM for "multinode-314500-m02" ...
	I0229 02:33:34.752723    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-314500-m02
	I0229 02:33:37.486143    8616 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:33:37.487002    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:33:37.487002    8616 main.go:141] libmachine: Waiting for host to start...
	I0229 02:33:37.487002    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:33:39.611868    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:33:39.612020    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:33:39.612120    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:33:41.957419    8616 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:33:41.957419    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:33:42.962618    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:33:45.015396    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:33:45.015396    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:33:45.015396    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:33:47.388521    8616 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:33:47.388521    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:33:48.398888    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:33:50.484866    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:33:50.484939    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:33:50.485007    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:33:52.820581    8616 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:33:52.820581    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:33:53.832381    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:33:55.906547    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:33:55.906754    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:33:55.906754    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:33:58.306070    8616 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:33:58.306070    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:33:59.318519    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:34:01.385231    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:34:01.385325    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:01.385410    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:34:03.768211    8616 main.go:141] libmachine: [stdout =====>] : 172.19.4.42
	
	I0229 02:34:03.768245    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:03.770748    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:34:05.744842    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:34:05.745040    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:05.745040    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:34:08.174899    8616 main.go:141] libmachine: [stdout =====>] : 172.19.4.42
	
	I0229 02:34:08.174899    8616 main.go:141] libmachine: [stderr =====>] : 
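The "Waiting for host to start..." sequence above polls the VM state and retries until Hyper-V reports an IP address on the first network adapter. A minimal sketch of that retry shape, using an illustrative counter in place of the real `Get-VM` queries (the threshold and address below are stand-ins, not from the log):

```shell
# Poll-until-IP loop: keep retrying until an address is reported.
i=0
IP=""
while [ -z "$IP" ]; do
  i=$((i + 1))
  # Pretend the hypervisor only reports the address on the third poll;
  # the real loop sleeps ~1s between Get-VM invocations.
  if [ "$i" -ge 3 ]; then
    IP="172.19.4.42"
  fi
done
echo "poll count: $i, ip: $IP"
```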
	I0229 02:34:08.174899    8616 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:34:08.177999    8616 machine.go:88] provisioning docker machine ...
	I0229 02:34:08.177999    8616 buildroot.go:166] provisioning hostname "multinode-314500-m02"
	I0229 02:34:08.178093    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:34:10.185305    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:34:10.185305    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:10.185868    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:34:12.569346    8616 main.go:141] libmachine: [stdout =====>] : 172.19.4.42
	
	I0229 02:34:12.569346    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:12.575950    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:34:12.576500    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.4.42 22 <nil> <nil>}
	I0229 02:34:12.576500    8616 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-314500-m02 && echo "multinode-314500-m02" | sudo tee /etc/hostname
	I0229 02:34:12.737532    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-314500-m02
	
	I0229 02:34:12.737594    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:34:14.744299    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:34:14.744299    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:14.744299    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:34:17.165796    8616 main.go:141] libmachine: [stdout =====>] : 172.19.4.42
	
	I0229 02:34:17.165796    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:17.170129    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:34:17.170747    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.4.42 22 <nil> <nil>}
	I0229 02:34:17.170747    8616 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-314500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-314500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-314500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:34:17.315870    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: 
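The `/etc/hosts` command run over SSH above either rewrites an existing `127.0.1.1` entry or appends a new one for the node hostname. A sketch of the same logic, exercised against a temporary file rather than the real `/etc/hosts` (the seed contents here are illustrative):

```shell
# Update-or-append a 127.0.1.1 hostname entry in a hosts-style file.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
NAME=multinode-314500-m02
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # An existing 127.0.1.1 entry: rewrite it in place.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # No 127.0.1.1 entry yet: append one.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Running the block twice is a no-op the second time, since the outer `grep` then finds the hostname already present.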
	I0229 02:34:17.315870    8616 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 02:34:17.315870    8616 buildroot.go:174] setting up certificates
	I0229 02:34:17.315870    8616 provision.go:83] configureAuth start
	I0229 02:34:17.315870    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:34:19.321336    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:34:19.321606    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:19.321645    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:34:21.715674    8616 main.go:141] libmachine: [stdout =====>] : 172.19.4.42
	
	I0229 02:34:21.715674    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:21.715763    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:34:23.708178    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:34:23.708296    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:23.708296    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:34:26.135623    8616 main.go:141] libmachine: [stdout =====>] : 172.19.4.42
	
	I0229 02:34:26.135623    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:26.135623    8616 provision.go:138] copyHostCerts
	I0229 02:34:26.135623    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 02:34:26.135623    8616 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 02:34:26.135623    8616 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 02:34:26.136841    8616 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 02:34:26.137785    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 02:34:26.138097    8616 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 02:34:26.138097    8616 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 02:34:26.138409    8616 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 02:34:26.139242    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 02:34:26.139466    8616 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 02:34:26.139466    8616 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 02:34:26.139776    8616 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0229 02:34:26.140569    8616 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-314500-m02 san=[172.19.4.42 172.19.4.42 localhost 127.0.0.1 minikube multinode-314500-m02]
	I0229 02:34:26.511651    8616 provision.go:172] copyRemoteCerts
	I0229 02:34:26.521877    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:34:26.521877    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:34:28.545613    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:34:28.545814    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:28.545943    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:34:30.943651    8616 main.go:141] libmachine: [stdout =====>] : 172.19.4.42
	
	I0229 02:34:30.943688    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:30.944133    8616 sshutil.go:53] new ssh client: &{IP:172.19.4.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\id_rsa Username:docker}
	I0229 02:34:31.052039    8616 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5298081s)
	I0229 02:34:31.052039    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 02:34:31.052137    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 02:34:31.104110    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 02:34:31.104110    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0229 02:34:31.152185    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 02:34:31.152543    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:34:31.200201    8616 provision.go:86] duration metric: configureAuth took 13.8835557s
	I0229 02:34:31.200333    8616 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:34:31.200876    8616 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:34:31.200945    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:34:33.178544    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:34:33.178634    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:33.178634    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:34:35.598402    8616 main.go:141] libmachine: [stdout =====>] : 172.19.4.42
	
	I0229 02:34:35.598402    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:35.602794    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:34:35.603193    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.4.42 22 <nil> <nil>}
	I0229 02:34:35.603193    8616 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 02:34:35.734405    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 02:34:35.734464    8616 buildroot.go:70] root file system type: tmpfs
	I0229 02:34:35.734647    8616 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 02:34:35.734712    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:34:37.770855    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:34:37.771868    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:37.772012    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:34:40.236696    8616 main.go:141] libmachine: [stdout =====>] : 172.19.4.42
	
	I0229 02:34:40.236696    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:40.242554    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:34:40.242659    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.4.42 22 <nil> <nil>}
	I0229 02:34:40.242659    8616 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.2.252"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 02:34:40.403218    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.2.252
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 02:34:40.403300    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:34:42.423744    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:34:42.423940    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:42.423940    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:34:44.822113    8616 main.go:141] libmachine: [stdout =====>] : 172.19.4.42
	
	I0229 02:34:44.822113    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:44.826010    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:34:44.826445    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.4.42 22 <nil> <nil>}
	I0229 02:34:44.826445    8616 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 02:34:46.048096    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 02:34:46.048096    8616 machine.go:91] provisioned docker machine in 37.8679836s
	I0229 02:34:46.048096    8616 start.go:300] post-start starting for "multinode-314500-m02" (driver="hyperv")
	I0229 02:34:46.048096    8616 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:34:46.057481    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:34:46.057481    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:34:48.051565    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:34:48.051565    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:48.051565    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:34:50.456654    8616 main.go:141] libmachine: [stdout =====>] : 172.19.4.42
	
	I0229 02:34:50.456654    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:50.457247    8616 sshutil.go:53] new ssh client: &{IP:172.19.4.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\id_rsa Username:docker}
	I0229 02:34:50.572779    8616 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5149482s)
	I0229 02:34:50.582324    8616 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:34:50.588909    8616 command_runner.go:130] > NAME=Buildroot
	I0229 02:34:50.589000    8616 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 02:34:50.589000    8616 command_runner.go:130] > ID=buildroot
	I0229 02:34:50.589000    8616 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 02:34:50.589000    8616 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 02:34:50.589373    8616 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:34:50.589373    8616 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 02:34:50.589373    8616 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 02:34:50.590319    8616 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> 33122.pem in /etc/ssl/certs
	I0229 02:34:50.590319    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /etc/ssl/certs/33122.pem
	I0229 02:34:50.599497    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:34:50.618486    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /etc/ssl/certs/33122.pem (1708 bytes)
	I0229 02:34:50.665303    8616 start.go:303] post-start completed in 4.6169494s
	I0229 02:34:50.665303    8616 fix.go:56] fixHost completed within 1m17.9279783s
	I0229 02:34:50.665839    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:34:52.680989    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:34:52.680989    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:52.680989    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:34:55.095148    8616 main.go:141] libmachine: [stdout =====>] : 172.19.4.42
	
	I0229 02:34:55.095578    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:55.101377    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:34:55.101944    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.4.42 22 <nil> <nil>}
	I0229 02:34:55.102047    8616 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 02:34:55.237557    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709174095.407990813
	
	I0229 02:34:55.237557    8616 fix.go:206] guest clock: 1709174095.407990813
	I0229 02:34:55.237557    8616 fix.go:219] Guest: 2024-02-29 02:34:55.407990813 +0000 UTC Remote: 2024-02-29 02:34:50.6653035 +0000 UTC m=+224.146046201 (delta=4.742687313s)
	I0229 02:34:55.237651    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:34:57.244690    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:34:57.244690    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:57.245713    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:34:59.641651    8616 main.go:141] libmachine: [stdout =====>] : 172.19.4.42
	
	I0229 02:34:59.641651    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:34:59.645575    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:34:59.645575    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.4.42 22 <nil> <nil>}
	I0229 02:34:59.646103    8616 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709174095
	I0229 02:34:59.791370    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 02:34:55 UTC 2024
	
	I0229 02:34:59.791451    8616 fix.go:226] clock set: Thu Feb 29 02:34:55 UTC 2024
	 (err=<nil>)
	I0229 02:34:59.791451    8616 start.go:83] releasing machines lock for "multinode-314500-m02", held for 1m27.0536172s
	I0229 02:34:59.791865    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:35:01.816553    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:35:01.816633    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:35:01.816864    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:35:04.252265    8616 main.go:141] libmachine: [stdout =====>] : 172.19.4.42
	
	I0229 02:35:04.252265    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:35:04.253036    8616 out.go:177] * Found network options:
	I0229 02:35:04.253631    8616 out.go:177]   - NO_PROXY=172.19.2.252
	W0229 02:35:04.254258    8616 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 02:35:04.254865    8616 out.go:177]   - NO_PROXY=172.19.2.252
	W0229 02:35:04.255334    8616 proxy.go:119] fail to check proxy env: Error ip not in block
	W0229 02:35:04.256584    8616 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 02:35:04.258364    8616 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:35:04.258901    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:35:04.266180    8616 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 02:35:04.266180    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:35:06.323069    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:35:06.323151    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:35:06.323229    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:35:06.332004    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:35:06.332004    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:35:06.332004    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:35:08.788989    8616 main.go:141] libmachine: [stdout =====>] : 172.19.4.42
	
	I0229 02:35:08.789375    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:35:08.789947    8616 sshutil.go:53] new ssh client: &{IP:172.19.4.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\id_rsa Username:docker}
	I0229 02:35:08.816539    8616 main.go:141] libmachine: [stdout =====>] : 172.19.4.42
	
	I0229 02:35:08.816539    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:35:08.816943    8616 sshutil.go:53] new ssh client: &{IP:172.19.4.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\id_rsa Username:docker}
	I0229 02:35:08.960290    8616 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 02:35:08.960397    8616 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7017705s)
	I0229 02:35:08.960397    8616 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0229 02:35:08.960397    8616 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.693955s)
	W0229 02:35:08.960397    8616 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:35:08.969921    8616 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:35:09.006107    8616 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0229 02:35:09.006247    8616 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:35:09.006247    8616 start.go:475] detecting cgroup driver to use...
	I0229 02:35:09.006247    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:35:09.041398    8616 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0229 02:35:09.052029    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 02:35:09.087917    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:35:09.112773    8616 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:35:09.126324    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:35:09.160253    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:35:09.191047    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:35:09.223837    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:35:09.256461    8616 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:35:09.288195    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 02:35:09.317734    8616 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:35:09.336020    8616 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 02:35:09.345004    8616 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:35:09.377917    8616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:35:09.576300    8616 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 02:35:09.610574    8616 start.go:475] detecting cgroup driver to use...
	I0229 02:35:09.620032    8616 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 02:35:09.649829    8616 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0229 02:35:09.649829    8616 command_runner.go:130] > [Unit]
	I0229 02:35:09.649829    8616 command_runner.go:130] > Description=Docker Application Container Engine
	I0229 02:35:09.649829    8616 command_runner.go:130] > Documentation=https://docs.docker.com
	I0229 02:35:09.649829    8616 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0229 02:35:09.649829    8616 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0229 02:35:09.649829    8616 command_runner.go:130] > StartLimitBurst=3
	I0229 02:35:09.649829    8616 command_runner.go:130] > StartLimitIntervalSec=60
	I0229 02:35:09.649829    8616 command_runner.go:130] > [Service]
	I0229 02:35:09.649829    8616 command_runner.go:130] > Type=notify
	I0229 02:35:09.649829    8616 command_runner.go:130] > Restart=on-failure
	I0229 02:35:09.649829    8616 command_runner.go:130] > Environment=NO_PROXY=172.19.2.252
	I0229 02:35:09.649829    8616 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0229 02:35:09.649829    8616 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0229 02:35:09.649829    8616 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0229 02:35:09.649829    8616 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0229 02:35:09.649829    8616 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0229 02:35:09.649829    8616 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0229 02:35:09.649829    8616 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0229 02:35:09.649829    8616 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0229 02:35:09.649829    8616 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0229 02:35:09.649829    8616 command_runner.go:130] > ExecStart=
	I0229 02:35:09.649829    8616 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0229 02:35:09.649829    8616 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0229 02:35:09.649829    8616 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0229 02:35:09.649829    8616 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0229 02:35:09.649829    8616 command_runner.go:130] > LimitNOFILE=infinity
	I0229 02:35:09.649829    8616 command_runner.go:130] > LimitNPROC=infinity
	I0229 02:35:09.649829    8616 command_runner.go:130] > LimitCORE=infinity
	I0229 02:35:09.649829    8616 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0229 02:35:09.649829    8616 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0229 02:35:09.649829    8616 command_runner.go:130] > TasksMax=infinity
	I0229 02:35:09.649829    8616 command_runner.go:130] > TimeoutStartSec=0
	I0229 02:35:09.649829    8616 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0229 02:35:09.649829    8616 command_runner.go:130] > Delegate=yes
	I0229 02:35:09.649829    8616 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0229 02:35:09.649829    8616 command_runner.go:130] > KillMode=process
	I0229 02:35:09.649829    8616 command_runner.go:130] > [Install]
	I0229 02:35:09.650358    8616 command_runner.go:130] > WantedBy=multi-user.target
	I0229 02:35:09.659957    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:35:09.695384    8616 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:35:09.731554    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:35:09.767001    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:35:09.803681    8616 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:35:09.854987    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:35:09.880114    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:35:09.918019    8616 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0229 02:35:09.926830    8616 ssh_runner.go:195] Run: which cri-dockerd
	I0229 02:35:09.933433    8616 command_runner.go:130] > /usr/bin/cri-dockerd
	I0229 02:35:09.941946    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 02:35:09.962499    8616 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 02:35:10.007581    8616 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 02:35:10.208129    8616 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 02:35:10.395689    8616 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 02:35:10.395689    8616 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 02:35:10.439846    8616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:35:10.631132    8616 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 02:35:12.165738    8616 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.534521s)
	I0229 02:35:12.174292    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 02:35:12.209397    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:35:12.244315    8616 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 02:35:12.434256    8616 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 02:35:12.636646    8616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:35:12.825118    8616 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 02:35:12.868091    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:35:12.907466    8616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:35:13.112770    8616 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 02:35:13.214538    8616 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 02:35:13.226316    8616 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 02:35:13.234143    8616 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0229 02:35:13.234971    8616 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 02:35:13.234971    8616 command_runner.go:130] > Device: 0,22	Inode: 846         Links: 1
	I0229 02:35:13.234971    8616 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0229 02:35:13.234971    8616 command_runner.go:130] > Access: 2024-02-29 02:35:13.308585367 +0000
	I0229 02:35:13.234971    8616 command_runner.go:130] > Modify: 2024-02-29 02:35:13.308585367 +0000
	I0229 02:35:13.234971    8616 command_runner.go:130] > Change: 2024-02-29 02:35:13.311585518 +0000
	I0229 02:35:13.234971    8616 command_runner.go:130] >  Birth: -
	I0229 02:35:13.236272    8616 start.go:543] Will wait 60s for crictl version
	I0229 02:35:13.244945    8616 ssh_runner.go:195] Run: which crictl
	I0229 02:35:13.251462    8616 command_runner.go:130] > /usr/bin/crictl
	I0229 02:35:13.263961    8616 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:35:13.335047    8616 command_runner.go:130] > Version:  0.1.0
	I0229 02:35:13.335047    8616 command_runner.go:130] > RuntimeName:  docker
	I0229 02:35:13.335047    8616 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0229 02:35:13.335047    8616 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 02:35:13.337239    8616 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 02:35:13.346872    8616 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:35:13.386184    8616 command_runner.go:130] > 24.0.7
	I0229 02:35:13.393296    8616 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:35:13.432170    8616 command_runner.go:130] > 24.0.7
	I0229 02:35:13.433903    8616 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 02:35:13.434837    8616 out.go:177]   - env NO_PROXY=172.19.2.252
	I0229 02:35:13.435525    8616 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 02:35:13.440189    8616 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 02:35:13.440189    8616 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 02:35:13.440189    8616 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 02:35:13.440189    8616 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:a3:c1 Flags:up|broadcast|multicast|running}
	I0229 02:35:13.443265    8616 ip.go:210] interface addr: fe80::fc78:4865:5cac:d448/64
	I0229 02:35:13.443328    8616 ip.go:210] interface addr: 172.19.0.1/20
	I0229 02:35:13.451562    8616 ssh_runner.go:195] Run: grep 172.19.0.1	host.minikube.internal$ /etc/hosts
	I0229 02:35:13.459037    8616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:35:13.484314    8616 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500 for IP: 172.19.4.42
	I0229 02:35:13.484314    8616 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:35:13.484994    8616 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 02:35:13.485356    8616 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 02:35:13.485497    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 02:35:13.485632    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 02:35:13.485903    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 02:35:13.485903    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 02:35:13.485903    8616 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem (1338 bytes)
	W0229 02:35:13.486516    8616 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312_empty.pem, impossibly tiny 0 bytes
	I0229 02:35:13.486662    8616 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 02:35:13.486803    8616 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 02:35:13.487012    8616 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 02:35:13.487249    8616 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0229 02:35:13.487514    8616 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem (1708 bytes)
	I0229 02:35:13.487712    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:35:13.487780    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem -> /usr/share/ca-certificates/3312.pem
	I0229 02:35:13.487918    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /usr/share/ca-certificates/33122.pem
	I0229 02:35:13.488717    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:35:13.538088    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:35:13.585631    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:35:13.636242    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:35:13.682471    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:35:13.729011    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem --> /usr/share/ca-certificates/3312.pem (1338 bytes)
	I0229 02:35:13.775082    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /usr/share/ca-certificates/33122.pem (1708 bytes)
	I0229 02:35:13.829717    8616 ssh_runner.go:195] Run: openssl version
	I0229 02:35:13.838974    8616 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 02:35:13.847910    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:35:13.879231    8616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:35:13.886507    8616 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:35:13.886507    8616 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:35:13.894613    8616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:35:13.904497    8616 command_runner.go:130] > b5213941
	I0229 02:35:13.915276    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:35:13.947399    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3312.pem && ln -fs /usr/share/ca-certificates/3312.pem /etc/ssl/certs/3312.pem"
	I0229 02:35:13.979064    8616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3312.pem
	I0229 02:35:13.985905    8616 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:35:13.985905    8616 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:35:13.995961    8616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3312.pem
	I0229 02:35:14.004701    8616 command_runner.go:130] > 51391683
	I0229 02:35:14.013505    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3312.pem /etc/ssl/certs/51391683.0"
	I0229 02:35:14.045752    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/33122.pem && ln -fs /usr/share/ca-certificates/33122.pem /etc/ssl/certs/33122.pem"
	I0229 02:35:14.078041    8616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/33122.pem
	I0229 02:35:14.085032    8616 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:35:14.085434    8616 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:35:14.096068    8616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/33122.pem
	I0229 02:35:14.106025    8616 command_runner.go:130] > 3ec20f2e
	I0229 02:35:14.114897    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/33122.pem /etc/ssl/certs/3ec20f2e.0"
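The hash-and-symlink sequence logged above is how OpenSSL locates trust anchors: it looks up CA certificates by a `<subject-hash>.0` symlink in the certificate directory, where the hash is what `openssl x509 -hash -noout` prints. A minimal, self-contained sketch of the same step, using a hypothetical throwaway CA in a temp directory rather than minikube's real certs:

```shell
# Create a <subject-hash>.0 symlink the way the log above does, so that
# OpenSSL's lookup-by-hash can find the CA certificate.
set -eu
dir=$(mktemp -d)
# Hypothetical throwaway self-signed CA, for illustration only.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/demoCA.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/demoCA.pem")
# Equivalent of: test -L <hash>.0 || ln -fs <cert> <hash>.0
ln -fs "$dir/demoCA.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```

This is what `c_rehash` automates over an entire directory; minikube does it one certificate at a time.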
	I0229 02:35:14.148019    8616 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:35:14.154020    8616 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:35:14.155273    8616 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:35:14.162822    8616 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 02:35:14.200708    8616 command_runner.go:130] > cgroupfs
	I0229 02:35:14.201039    8616 cni.go:84] Creating CNI manager for ""
	I0229 02:35:14.201077    8616 cni.go:136] 3 nodes found, recommending kindnet
	I0229 02:35:14.201077    8616 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:35:14.201077    8616 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.4.42 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-314500 NodeName:multinode-314500-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.2.252"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.4.42 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:35:14.201546    8616 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.4.42
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-314500-m02"
	  kubeletExtraArgs:
	    node-ip: 172.19.4.42
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.2.252"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:35:14.201625    8616 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-314500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.4.42
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:35:14.210931    8616 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:35:14.230791    8616 command_runner.go:130] > kubeadm
	I0229 02:35:14.230791    8616 command_runner.go:130] > kubectl
	I0229 02:35:14.230791    8616 command_runner.go:130] > kubelet
	I0229 02:35:14.230791    8616 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:35:14.239536    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0229 02:35:14.258685    8616 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0229 02:35:14.290010    8616 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:35:14.333493    8616 ssh_runner.go:195] Run: grep 172.19.2.252	control-plane.minikube.internal$ /etc/hosts
	I0229 02:35:14.340674    8616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.2.252	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
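The `/etc/hosts` rewrite on the line above is a filter-and-append: strip any existing `control-plane.minikube.internal` entry, then append the current one, so repeated starts stay idempotent. A sketch of the same pipeline against a temporary file (the IP and file paths are placeholders; the real flow copies into `/etc/hosts` with `sudo cp`):

```shell
# Ensure exactly one control-plane.minikube.internal entry survives,
# mirroring the grep -v / append / cp pipeline in the log above.
set -eu
hosts=$(mktemp)
tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n172.19.9.9\tcontrol-plane.minikube.internal\n' > "$hosts"
ip=172.19.2.252   # placeholder for the control plane's current IP
{ grep -v "${tab}control-plane.minikube.internal\$" "$hosts"; \
  printf '%s\tcontrol-plane.minikube.internal\n' "$ip"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Writing to a temp file and then copying over the original avoids truncating `/etc/hosts` while it is still being read by the filter.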
	I0229 02:35:14.363980    8616 host.go:66] Checking if "multinode-314500" exists ...
	I0229 02:35:14.364814    8616 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:35:14.364814    8616 start.go:304] JoinCluster: &{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.4.42 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.5.92 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:35:14.365199    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0229 02:35:14.365199    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:35:16.376772    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:35:16.376772    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:35:16.376870    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:35:18.786907    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:35:18.787577    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:35:18.788287    8616 sshutil.go:53] new ssh client: &{IP:172.19.2.252 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:35:19.004198    8616 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token yduind.hic39kg8cey3clzl --discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 
	I0229 02:35:19.004274    8616 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.638816s)
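The `--discovery-token-ca-cert-hash sha256:…` printed in the join command above pins the cluster CA: per kubeadm's documentation it is the SHA-256 digest of the DER encoding of the CA certificate's public key. A sketch of that computation with a hypothetical throwaway CA (not the cluster's real `ca.crt`):

```shell
# Compute a kubeadm-style discovery-token CA cert hash:
# sha256 over the DER encoding of the CA certificate's public key.
set -eu
dir=$(mktemp -d)
# Throwaway CA standing in for /var/lib/minikube/certs/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null
hash=$(openssl x509 -pubkey -noout -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

A joining node recomputes this over the CA it receives during discovery and refuses to join if it does not match, which is why the token alone is not enough to spoof a control plane.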
	I0229 02:35:19.004274    8616 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.19.4.42 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 02:35:19.004274    8616 host.go:66] Checking if "multinode-314500" exists ...
	I0229 02:35:19.013885    8616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-314500-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0229 02:35:19.013885    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:35:21.034526    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:35:21.034834    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:35:21.035001    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:35:23.479960    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:35:23.480659    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:35:23.480907    8616 sshutil.go:53] new ssh client: &{IP:172.19.2.252 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:35:23.673674    8616 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0229 02:35:23.759090    8616 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-6r7b8, kube-system/kube-proxy-4gbrl
	I0229 02:35:26.779309    8616 command_runner.go:130] > node/multinode-314500-m02 cordoned
	I0229 02:35:26.779460    8616 command_runner.go:130] > pod "busybox-5b5d89c9d6-826w2" has DeletionTimestamp older than 1 seconds, skipping
	I0229 02:35:26.779460    8616 command_runner.go:130] > node/multinode-314500-m02 drained
	I0229 02:35:26.779596    8616 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-314500-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (7.7651425s)
	I0229 02:35:26.779596    8616 node.go:108] successfully drained node "m02"
	I0229 02:35:26.781002    8616 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:35:26.782184    8616 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.252:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:35:26.783493    8616 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0229 02:35:26.783493    8616 round_trippers.go:463] DELETE https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:35:26.783493    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:26.783493    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:26.783493    8616 round_trippers.go:473]     Content-Type: application/json
	I0229 02:35:26.783493    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:26.801299    8616 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0229 02:35:26.801299    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:26.801299    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:26.801299    8616 round_trippers.go:580]     Content-Length: 171
	I0229 02:35:26.801299    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:26 GMT
	I0229 02:35:26.801299    8616 round_trippers.go:580]     Audit-Id: f4cbfc60-81ed-4dfd-8742-6b7ab3f2c466
	I0229 02:35:26.801299    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:26.801299    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:26.801299    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:26.801805    8616 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-314500-m02","kind":"nodes","uid":"3beb5105-3e5d-41d2-b159-2c9cf0a9228d"}}
	I0229 02:35:26.801883    8616 node.go:124] successfully deleted node "m02"
	I0229 02:35:26.801954    8616 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.19.4.42 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 02:35:26.802020    8616 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.19.4.42 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 02:35:26.802086    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yduind.hic39kg8cey3clzl --discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-314500-m02"
	I0229 02:35:27.018395    8616 command_runner.go:130] ! W0229 02:35:27.189896    1322 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0229 02:35:27.522932    8616 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:35:29.389010    8616 command_runner.go:130] > [preflight] Running pre-flight checks
	I0229 02:35:29.389010    8616 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0229 02:35:29.389010    8616 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0229 02:35:29.389010    8616 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:35:29.389010    8616 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:35:29.389010    8616 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 02:35:29.389010    8616 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0229 02:35:29.389010    8616 command_runner.go:130] > This node has joined the cluster:
	I0229 02:35:29.389010    8616 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0229 02:35:29.389010    8616 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0229 02:35:29.389010    8616 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0229 02:35:29.389010    8616 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yduind.hic39kg8cey3clzl --discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-314500-m02": (2.5867809s)
	I0229 02:35:29.389010    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0229 02:35:29.679022    8616 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0229 02:35:29.958094    8616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=multinode-314500 minikube.k8s.io/updated_at=2024_02_29T02_35_29_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:35:30.120756    8616 command_runner.go:130] > node/multinode-314500-m02 labeled
	I0229 02:35:30.120756    8616 command_runner.go:130] > node/multinode-314500-m03 labeled
	I0229 02:35:30.121775    8616 start.go:306] JoinCluster complete in 15.7560843s
	I0229 02:35:30.121775    8616 cni.go:84] Creating CNI manager for ""
	I0229 02:35:30.121775    8616 cni.go:136] 3 nodes found, recommending kindnet
	I0229 02:35:30.129769    8616 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 02:35:30.137759    8616 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 02:35:30.137759    8616 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 02:35:30.137759    8616 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 02:35:30.137759    8616 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 02:35:30.137759    8616 command_runner.go:130] > Access: 2024-02-29 02:31:42.605077900 +0000
	I0229 02:35:30.137759    8616 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 02:35:30.137759    8616 command_runner.go:130] > Change: 2024-02-29 02:31:30.415000000 +0000
	I0229 02:35:30.137759    8616 command_runner.go:130] >  Birth: -
	I0229 02:35:30.137759    8616 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 02:35:30.137759    8616 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 02:35:30.180373    8616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 02:35:30.615501    8616 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0229 02:35:30.615589    8616 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0229 02:35:30.615589    8616 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0229 02:35:30.615589    8616 command_runner.go:130] > daemonset.apps/kindnet configured
	I0229 02:35:30.620401    8616 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:35:30.621082    8616 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.252:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:35:30.621698    8616 round_trippers.go:463] GET https://172.19.2.252:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 02:35:30.621698    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:30.621698    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:30.621698    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:30.624229    8616 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:35:30.624229    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:30.624229    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:30.624229    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:30.625175    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:30.625175    8616 round_trippers.go:580]     Content-Length: 292
	I0229 02:35:30.625175    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:30 GMT
	I0229 02:35:30.625175    8616 round_trippers.go:580]     Audit-Id: b1b1c4a3-b492-4b47-947e-a8ce3f1179cf
	I0229 02:35:30.625175    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:30.625264    8616 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"1439","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 02:35:30.625480    8616 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-314500" context rescaled to 1 replicas
	I0229 02:35:30.625557    8616 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.19.4.42 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 02:35:30.626561    8616 out.go:177] * Verifying Kubernetes components...
	I0229 02:35:30.635591    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:30.664569    8616 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:35:30.665154    8616 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.252:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:35:30.665364    8616 node_ready.go:35] waiting up to 6m0s for node "multinode-314500-m02" to be "Ready" ...
	I0229 02:35:30.665908    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:35:30.665908    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:30.665958    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:30.665958    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:30.669132    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:35:30.669490    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:30.669490    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:30 GMT
	I0229 02:35:30.669490    8616 round_trippers.go:580]     Audit-Id: c3500321-acd5-4b51-b062-1a144b3f4e4e
	I0229 02:35:30.669490    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:30.669490    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:30.669490    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:30.669490    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:30.669821    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"2332789d-7280-427a-9644-fc1ffcfc737d","resourceVersion":"1601","creationTimestamp":"2024-02-29T02:35:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_35_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:35:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3665 chars]
	I0229 02:35:31.178207    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:35:31.178267    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:31.178267    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:31.178267    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:31.183797    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:35:31.183905    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:31.183905    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:31.183905    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:31.183905    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:31 GMT
	I0229 02:35:31.183905    8616 round_trippers.go:580]     Audit-Id: a478f6d9-1231-421c-8b64-cc953909a6ab
	I0229 02:35:31.183978    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:31.183978    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:31.184264    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"2332789d-7280-427a-9644-fc1ffcfc737d","resourceVersion":"1601","creationTimestamp":"2024-02-29T02:35:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_35_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:35:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3665 chars]
	I0229 02:35:31.676543    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:35:31.677207    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:31.677207    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:31.677207    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:31.684164    8616 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:35:31.684164    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:31.684164    8616 round_trippers.go:580]     Audit-Id: 9334d926-740d-471b-8070-70f03570a799
	I0229 02:35:31.684164    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:31.684164    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:31.684164    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:31.684164    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:31.684164    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:31 GMT
	I0229 02:35:31.684164    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"2332789d-7280-427a-9644-fc1ffcfc737d","resourceVersion":"1601","creationTimestamp":"2024-02-29T02:35:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_35_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:35:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3665 chars]
	I0229 02:35:32.177502    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:35:32.177589    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:32.177589    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:32.177589    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:32.184706    8616 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:35:32.184706    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:32.184706    8616 round_trippers.go:580]     Audit-Id: 3c53fee1-c646-4b4c-a6df-c6d74e4e690c
	I0229 02:35:32.184706    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:32.184706    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:32.184706    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:32.184706    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:32.184706    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:32 GMT
	I0229 02:35:32.185379    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"2332789d-7280-427a-9644-fc1ffcfc737d","resourceVersion":"1601","creationTimestamp":"2024-02-29T02:35:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_35_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:35:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3665 chars]
	I0229 02:35:32.676428    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:35:32.676752    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:32.676752    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:32.676752    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:32.680728    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:35:32.680728    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:32.681641    8616 round_trippers.go:580]     Audit-Id: c7c38eb2-8bf5-4082-a78e-60280e49cdf6
	I0229 02:35:32.681641    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:32.681641    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:32.681641    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:32.681641    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:32.681641    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:32 GMT
	I0229 02:35:32.681795    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"2332789d-7280-427a-9644-fc1ffcfc737d","resourceVersion":"1601","creationTimestamp":"2024-02-29T02:35:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_35_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:35:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3665 chars]
	I0229 02:35:32.682323    8616 node_ready.go:58] node "multinode-314500-m02" has status "Ready":"False"
	I0229 02:35:33.174899    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:35:33.174899    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:33.174899    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:33.175000    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:33.181563    8616 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:35:33.181563    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:33.181563    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:33.181563    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:33.181563    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:33.181563    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:33 GMT
	I0229 02:35:33.181563    8616 round_trippers.go:580]     Audit-Id: d36d5970-b11d-4357-990c-171a9bc0fbf4
	I0229 02:35:33.181563    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:33.182530    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"2332789d-7280-427a-9644-fc1ffcfc737d","resourceVersion":"1601","creationTimestamp":"2024-02-29T02:35:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_35_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:35:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3665 chars]
	I0229 02:35:33.677084    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:35:33.677084    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:33.677084    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:33.677170    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:33.682365    8616 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:35:33.682365    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:33.682365    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:33.682365    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:33.682365    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:33.682365    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:33 GMT
	I0229 02:35:33.682365    8616 round_trippers.go:580]     Audit-Id: 6bbb0976-7002-42b7-8d1a-8971676991ec
	I0229 02:35:33.682365    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:33.682937    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"2332789d-7280-427a-9644-fc1ffcfc737d","resourceVersion":"1609","creationTimestamp":"2024-02-29T02:35:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_35_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:35:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3923 chars]
	I0229 02:35:33.683077    8616 node_ready.go:49] node "multinode-314500-m02" has status "Ready":"True"
	I0229 02:35:33.683077    8616 node_ready.go:38] duration metric: took 3.017546s waiting for node "multinode-314500-m02" to be "Ready" ...
	I0229 02:35:33.683077    8616 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:35:33.683077    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods
	I0229 02:35:33.683077    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:33.683077    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:33.683077    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:33.688252    8616 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:35:33.688252    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:33.688252    8616 round_trippers.go:580]     Audit-Id: a3ae6185-5d26-44f4-8af2-d48b474a065f
	I0229 02:35:33.688252    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:33.688252    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:33.688252    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:33.688252    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:33.688252    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:33 GMT
	I0229 02:35:33.689409    8616 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1611"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82895 chars]
	I0229 02:35:33.692389    8616 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:35:33.693390    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:35:33.693529    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:33.693529    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:33.693529    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:33.696869    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:35:33.696869    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:33.696869    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:33.696869    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:33.696869    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:33.696869    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:33 GMT
	I0229 02:35:33.696869    8616 round_trippers.go:580]     Audit-Id: d83e8c50-8109-4363-aa9f-f82db9163b9b
	I0229 02:35:33.696869    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:33.696869    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6492 chars]
	I0229 02:35:33.697844    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:35:33.697844    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:33.697844    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:33.697844    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:33.700881    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:35:33.700881    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:33.700881    8616 round_trippers.go:580]     Audit-Id: f6ca5330-daf7-4e97-af0a-f324dd4ebcc1
	I0229 02:35:33.700881    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:33.700881    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:33.700881    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:33.700881    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:33.700881    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:33 GMT
	I0229 02:35:33.701406    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:35:33.701621    8616 pod_ready.go:92] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"True"
	I0229 02:35:33.701873    8616 pod_ready.go:81] duration metric: took 8.4833ms waiting for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:35:33.701873    8616 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:35:33.701873    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-314500
	I0229 02:35:33.701873    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:33.701873    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:33.701873    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:33.704880    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:35:33.704880    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:33.704880    8616 round_trippers.go:580]     Audit-Id: 246d828c-ed5d-43f6-8e12-3d61aaffa3fe
	I0229 02:35:33.704880    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:33.704880    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:33.704880    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:33.704880    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:33.704880    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:33 GMT
	I0229 02:35:33.704880    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-314500","namespace":"kube-system","uid":"b4f5f225-c7b2-4d26-a0ad-f09b2045ea14","resourceVersion":"1409","creationTimestamp":"2024-02-29T02:33:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.2.252:2379","kubernetes.io/config.hash":"b583592d76a92080553678603be807ce","kubernetes.io/config.mirror":"b583592d76a92080553678603be807ce","kubernetes.io/config.seen":"2024-02-29T02:32:57.667230131Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:33:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5853 chars]
	I0229 02:35:33.704880    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:35:33.704880    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:33.704880    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:33.704880    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:33.707936    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:35:33.707936    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:33.707936    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:33.707936    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:33.707936    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:33 GMT
	I0229 02:35:33.707936    8616 round_trippers.go:580]     Audit-Id: 2a6efd30-e5f1-41c7-86b8-e1aa1a68637a
	I0229 02:35:33.707936    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:33.707936    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:33.707936    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:35:33.709066    8616 pod_ready.go:92] pod "etcd-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:35:33.709066    8616 pod_ready.go:81] duration metric: took 7.1922ms waiting for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:35:33.709100    8616 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:35:33.709198    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-314500
	I0229 02:35:33.709198    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:33.709244    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:33.709244    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:33.713939    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:35:33.713939    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:33.713939    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:33.713939    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:33.713939    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:33 GMT
	I0229 02:35:33.713939    8616 round_trippers.go:580]     Audit-Id: 9fc141b6-08a7-4123-afae-20236b7541b5
	I0229 02:35:33.713939    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:33.713939    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:33.714517    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-314500","namespace":"kube-system","uid":"d64133c2-8b75-4b12-b270-cbd060c1374e","resourceVersion":"1408","creationTimestamp":"2024-02-29T02:33:04Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.2.252:8443","kubernetes.io/config.hash":"462233dfd1884b55b9575973e0f20340","kubernetes.io/config.mirror":"462233dfd1884b55b9575973e0f20340","kubernetes.io/config.seen":"2024-02-29T02:32:57.667231431Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:33:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7391 chars]
	I0229 02:35:33.714701    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:35:33.714701    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:33.714701    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:33.714701    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:33.717472    8616 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:35:33.717472    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:33.717472    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:33.717472    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:33.717472    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:33.717472    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:33 GMT
	I0229 02:35:33.717472    8616 round_trippers.go:580]     Audit-Id: f8396cee-61a0-4f92-8f6e-ce7eb8018a72
	I0229 02:35:33.717472    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:33.717472    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:35:33.718475    8616 pod_ready.go:92] pod "kube-apiserver-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:35:33.718475    8616 pod_ready.go:81] duration metric: took 9.3744ms waiting for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:35:33.718475    8616 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:35:33.718475    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-314500
	I0229 02:35:33.718475    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:33.718475    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:33.718475    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:33.721330    8616 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:35:33.721330    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:33.721330    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:33 GMT
	I0229 02:35:33.721330    8616 round_trippers.go:580]     Audit-Id: 4beb0d21-ce5b-4edc-9864-6a4412f15352
	I0229 02:35:33.721330    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:33.721330    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:33.721330    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:33.721330    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:33.721330    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-314500","namespace":"kube-system","uid":"58e57902-e113-44a9-b5b5-4aba2ba13491","resourceVersion":"1426","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.mirror":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.seen":"2024-02-29T02:15:52.221398986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7171 chars]
	I0229 02:35:33.722517    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:35:33.722517    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:33.722517    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:33.722517    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:33.724218    8616 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 02:35:33.725224    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:33.725224    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:33.725269    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:33.725269    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:33.725269    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:33.725269    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:33 GMT
	I0229 02:35:33.725269    8616 round_trippers.go:580]     Audit-Id: 54c6cc20-97d9-4245-8256-defa776505b7
	I0229 02:35:33.725269    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:35:33.725888    8616 pod_ready.go:92] pod "kube-controller-manager-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:35:33.725888    8616 pod_ready.go:81] duration metric: took 7.4129ms waiting for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:35:33.725888    8616 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:35:33.881606    8616 request.go:629] Waited for 155.7088ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gbrl
	I0229 02:35:33.881606    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gbrl
	I0229 02:35:33.881606    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:33.881606    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:33.881606    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:33.886247    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:35:33.886638    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:33.886638    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:33.886638    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:33.886638    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:34 GMT
	I0229 02:35:33.886638    8616 round_trippers.go:580]     Audit-Id: 1ecbcd08-610e-4052-9c73-16d68f7e7a40
	I0229 02:35:33.886638    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:33.886638    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:33.886880    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4gbrl","generateName":"kube-proxy-","namespace":"kube-system","uid":"accb56cb-79ee-4f16-b05e-91bf554c4a60","resourceVersion":"1598","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0229 02:35:34.082149    8616 request.go:629] Waited for 194.5977ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:35:34.082443    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:35:34.082443    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:34.082443    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:34.082443    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:34.086396    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:35:34.086396    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:34.086396    8616 round_trippers.go:580]     Audit-Id: c4843e75-af29-46eb-99fe-2417e489b053
	I0229 02:35:34.086396    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:34.086396    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:34.086396    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:34.086396    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:34.086396    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:34 GMT
	I0229 02:35:34.086895    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"2332789d-7280-427a-9644-fc1ffcfc737d","resourceVersion":"1609","creationTimestamp":"2024-02-29T02:35:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_35_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:35:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3923 chars]
	I0229 02:35:34.087384    8616 pod_ready.go:92] pod "kube-proxy-4gbrl" in "kube-system" namespace has status "Ready":"True"
	I0229 02:35:34.087384    8616 pod_ready.go:81] duration metric: took 361.4756ms waiting for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:35:34.087384    8616 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:35:34.282892    8616 request.go:629] Waited for 195.2718ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:35:34.283239    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:35:34.283401    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:34.283451    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:34.283451    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:34.287821    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:35:34.287821    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:34.288754    8616 round_trippers.go:580]     Audit-Id: 0c91fc20-ec57-4ee1-8755-a21be8607f8a
	I0229 02:35:34.288754    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:34.288754    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:34.288754    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:34.288754    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:34.288754    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:34 GMT
	I0229 02:35:34.288968    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6r6j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b84b22d-3786-4f9e-a23a-c7cfc93bb671","resourceVersion":"1324","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5735 chars]
	I0229 02:35:34.486386    8616 request.go:629] Waited for 196.7647ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:35:34.486386    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:35:34.486386    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:34.486386    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:34.486386    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:34.490843    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:35:34.490843    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:34.491669    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:34.491669    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:34.491669    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:34 GMT
	I0229 02:35:34.491669    8616 round_trippers.go:580]     Audit-Id: e46488fb-8619-4bdf-bdfb-13f581681c9b
	I0229 02:35:34.491669    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:34.491669    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:34.491883    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:35:34.491973    8616 pod_ready.go:92] pod "kube-proxy-6r6j4" in "kube-system" namespace has status "Ready":"True"
	I0229 02:35:34.491973    8616 pod_ready.go:81] duration metric: took 404.5667ms waiting for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:35:34.491973    8616 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zvlt2" in "kube-system" namespace to be "Ready" ...
	I0229 02:35:34.686239    8616 request.go:629] Waited for 194.2554ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvlt2
	I0229 02:35:34.686610    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvlt2
	I0229 02:35:34.686610    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:34.687104    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:34.687104    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:34.690672    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:35:34.690672    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:34.690672    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:34.690672    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:34.690672    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:34.690672    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:34.690672    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:34 GMT
	I0229 02:35:34.690672    8616 round_trippers.go:580]     Audit-Id: 5cc1d45f-af6e-4316-b27c-2a8b992cf83f
	I0229 02:35:34.691664    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zvlt2","generateName":"kube-proxy-","namespace":"kube-system","uid":"0f29dabe-dc06-4460-bf19-55470247dbcc","resourceVersion":"1465","creationTimestamp":"2024-02-29T02:28:50Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:28:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5759 chars]
	I0229 02:35:34.888381    8616 request.go:629] Waited for 195.974ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:35:34.888465    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:35:34.888465    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:34.888465    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:34.888610    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:34.892360    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:35:34.892666    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:34.892666    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:34.892666    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:35 GMT
	I0229 02:35:34.892666    8616 round_trippers.go:580]     Audit-Id: ae69a2cf-17e7-4c91-a0ac-f991d0f2ba3c
	I0229 02:35:34.892751    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:34.892751    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:34.892751    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:34.893078    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m03","uid":"e3855f89-f53a-45b3-8e99-79bb2f21bdb0","resourceVersion":"1596","creationTimestamp":"2024-02-29T02:28:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_35_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:28:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4404 chars]
	I0229 02:35:34.893203    8616 pod_ready.go:97] node "multinode-314500-m03" hosting pod "kube-proxy-zvlt2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500-m03" has status "Ready":"Unknown"
	I0229 02:35:34.893203    8616 pod_ready.go:81] duration metric: took 401.2079ms waiting for pod "kube-proxy-zvlt2" in "kube-system" namespace to be "Ready" ...
	E0229 02:35:34.893203    8616 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-314500-m03" hosting pod "kube-proxy-zvlt2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500-m03" has status "Ready":"Unknown"
	I0229 02:35:34.893203    8616 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:35:35.092340    8616 request.go:629] Waited for 199.1256ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:35:35.092340    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:35:35.092654    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:35.092654    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:35.092654    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:35.096231    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:35:35.096231    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:35.096231    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:35.096231    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:35.096231    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:35.096231    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:35 GMT
	I0229 02:35:35.096231    8616 round_trippers.go:580]     Audit-Id: 41e21236-5a4e-49d1-9f13-11800a0afe46
	I0229 02:35:35.096231    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:35.096904    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-314500","namespace":"kube-system","uid":"31fcecc6-17de-43a6-892d-37cd915de64b","resourceVersion":"1428","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.mirror":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.seen":"2024-02-29T02:15:52.221399886Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4901 chars]
	I0229 02:35:35.278480    8616 request.go:629] Waited for 180.7784ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:35:35.278562    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:35:35.278562    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:35.278636    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:35.278667    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:35.282860    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:35:35.283004    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:35.283004    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:35.283004    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:35.283088    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:35 GMT
	I0229 02:35:35.283088    8616 round_trippers.go:580]     Audit-Id: 1e4f399e-a10e-4cde-8eac-fb5c2187cc19
	I0229 02:35:35.283161    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:35.283219    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:35.283248    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:35:35.283786    8616 pod_ready.go:92] pod "kube-scheduler-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:35:35.283915    8616 pod_ready.go:81] duration metric: took 390.6905ms waiting for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:35:35.283915    8616 pod_ready.go:38] duration metric: took 1.6007495s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:35:35.283915    8616 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:35:35.292960    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:35.319660    8616 system_svc.go:56] duration metric: took 35.7429ms WaitForService to wait for kubelet.
	I0229 02:35:35.319771    8616 kubeadm.go:581] duration metric: took 4.6938168s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:35:35.319805    8616 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:35:35.483373    8616 request.go:629] Waited for 163.3254ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes
	I0229 02:35:35.483481    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes
	I0229 02:35:35.483481    8616 round_trippers.go:469] Request Headers:
	I0229 02:35:35.483481    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:35:35.483570    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:35:35.496384    8616 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0229 02:35:35.496384    8616 round_trippers.go:577] Response Headers:
	I0229 02:35:35.496384    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:35:35.496384    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:35:35 GMT
	I0229 02:35:35.496384    8616 round_trippers.go:580]     Audit-Id: 59963205-ab64-4dde-8a69-025685429f63
	I0229 02:35:35.496384    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:35:35.496384    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:35:35.496851    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:35:35.497202    8616 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1614"},"items":[{"metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15481 chars]
	I0229 02:35:35.498069    8616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:35:35.498069    8616 node_conditions.go:123] node cpu capacity is 2
	I0229 02:35:35.498155    8616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:35:35.498155    8616 node_conditions.go:123] node cpu capacity is 2
	I0229 02:35:35.498155    8616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:35:35.498155    8616 node_conditions.go:123] node cpu capacity is 2
	I0229 02:35:35.498155    8616 node_conditions.go:105] duration metric: took 178.3398ms to run NodePressure ...
	I0229 02:35:35.498155    8616 start.go:228] waiting for startup goroutines ...
	I0229 02:35:35.498252    8616 start.go:242] writing updated cluster config ...
	I0229 02:35:35.515352    8616 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:35:35.515352    8616 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:35:35.519491    8616 out.go:177] * Starting worker node multinode-314500-m03 in cluster multinode-314500
	I0229 02:35:35.519913    8616 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:35:35.520159    8616 cache.go:56] Caching tarball of preloaded images
	I0229 02:35:35.520159    8616 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:35:35.520159    8616 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 02:35:35.520791    8616 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:35:35.533565    8616 start.go:365] acquiring machines lock for multinode-314500-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:35:35.534117    8616 start.go:369] acquired machines lock for "multinode-314500-m03" in 551.5µs
	I0229 02:35:35.534290    8616 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:35:35.534339    8616 fix.go:54] fixHost starting: m03
	I0229 02:35:35.534567    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:35:37.524717    8616 main.go:141] libmachine: [stdout =====>] : Off
	
	I0229 02:35:37.524717    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:35:37.524717    8616 fix.go:102] recreateIfNeeded on multinode-314500-m03: state=Stopped err=<nil>
	W0229 02:35:37.525365    8616 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:35:37.526133    8616 out.go:177] * Restarting existing hyperv VM for "multinode-314500-m03" ...
	I0229 02:35:37.526828    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-314500-m03
	I0229 02:35:40.289135    8616 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:35:40.289135    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:35:40.289135    8616 main.go:141] libmachine: Waiting for host to start...
	I0229 02:35:40.289135    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:35:42.442162    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:35:42.442260    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:35:42.442260    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:35:44.840449    8616 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:35:44.840449    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:35:45.843574    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:35:47.927132    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:35:47.927338    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:35:47.927338    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:35:50.323979    8616 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:35:50.323979    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:35:51.335686    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:35:53.401720    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:35:53.401720    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:35:53.401977    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:35:55.794637    8616 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:35:55.794637    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:35:56.809862    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:35:58.864980    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:35:58.864980    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:35:58.865322    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:36:01.222959    8616 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:36:01.222959    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:02.223510    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:36:04.321668    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:36:04.321668    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:04.321759    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:36:06.744565    8616 main.go:141] libmachine: [stdout =====>] : 172.19.1.210
	
	I0229 02:36:06.744565    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:06.746890    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:36:08.760109    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:36:08.760547    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:08.760627    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:36:11.183582    8616 main.go:141] libmachine: [stdout =====>] : 172.19.1.210
	
	I0229 02:36:11.183582    8616 main.go:141] libmachine: [stderr =====>] : 
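	The repeated `Get-VM ... .state` / `ipaddresses[0]` pairs above are the driver's "Waiting for host to start" loop: it polls Hyper-V until the VM's first network adapter reports an address, sleeping about a second between attempts. A minimal sketch of that loop (the Hyper-V query is mocked here with a hypothetical `get_ip` stub, since the real driver shells out to `powershell.exe`):

	```shell
	# Poll until an IP is reported, as the log's retry loop does.
	# get_ip is a mock standing in for the Hyper-V ipaddresses[0] query:
	# it returns nothing for the first two attempts, then an address.
	attempt=0
	get_ip() { if [ "$attempt" -ge 3 ]; then echo 172.19.1.210; fi; }
	ip=""
	while [ -z "$ip" ]; do
	  attempt=$((attempt + 1))
	  ip=$(get_ip)
	  [ -z "$ip" ] && sleep 0.1   # real driver waits ~1s between polls
	done
	echo "got IP $ip after $attempt attempts"
	```

	In the log the first two polls return an empty stdout (02:35:44 and 02:35:50) before the address `172.19.1.210` appears.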
	I0229 02:36:11.184054    8616 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:36:11.186016    8616 machine.go:88] provisioning docker machine ...
	I0229 02:36:11.186096    8616 buildroot.go:166] provisioning hostname "multinode-314500-m03"
	I0229 02:36:11.186267    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:36:13.199533    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:36:13.199533    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:13.199604    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:36:15.609950    8616 main.go:141] libmachine: [stdout =====>] : 172.19.1.210
	
	I0229 02:36:15.609950    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:15.614256    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:36:15.614718    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.1.210 22 <nil> <nil>}
	I0229 02:36:15.614718    8616 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-314500-m03 && echo "multinode-314500-m03" | sudo tee /etc/hostname
	I0229 02:36:15.782961    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-314500-m03
	
	I0229 02:36:15.782961    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:36:17.766575    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:36:17.766575    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:17.766575    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:36:20.160131    8616 main.go:141] libmachine: [stdout =====>] : 172.19.1.210
	
	I0229 02:36:20.160131    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:20.164353    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:36:20.164949    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.1.210 22 <nil> <nil>}
	I0229 02:36:20.164949    8616 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-314500-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-314500-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-314500-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:36:20.322587    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: 
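	The SSH script above makes the new hostname resolvable locally: if no `/etc/hosts` entry ends in the machine name, it either rewrites an existing `127.0.1.1` line or appends one. The same logic can be exercised safely against a temporary file (paths and the stale name are illustrative):

	```shell
	# Recreate the /etc/hosts update from the log against a temp copy.
	HOSTS=$(mktemp)
	printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
	NAME=multinode-314500-m03
	if ! grep -q "\s$NAME$" "$HOSTS"; then          # entry missing?
	  if grep -q '^127\.0\.1\.1\s' "$HOSTS"; then   # stale 127.0.1.1 line?
	    sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
	  else
	    echo "127.0.1.1 $NAME" >> "$HOSTS"
	  fi
	fi
	cat "$HOSTS"
	```

	Because the check runs first, the script is idempotent: rerunning it after the entry exists changes nothing, which matches the empty command output logged above.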
	I0229 02:36:20.322647    8616 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 02:36:20.322647    8616 buildroot.go:174] setting up certificates
	I0229 02:36:20.322647    8616 provision.go:83] configureAuth start
	I0229 02:36:20.322647    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:36:22.321226    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:36:22.321226    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:22.321226    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:36:24.736612    8616 main.go:141] libmachine: [stdout =====>] : 172.19.1.210
	
	I0229 02:36:24.737266    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:24.737266    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:36:26.724196    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:36:26.724196    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:26.724196    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:36:29.142164    8616 main.go:141] libmachine: [stdout =====>] : 172.19.1.210
	
	I0229 02:36:29.142164    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:29.142237    8616 provision.go:138] copyHostCerts
	I0229 02:36:29.142326    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 02:36:29.142459    8616 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 02:36:29.142459    8616 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 02:36:29.142459    8616 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 02:36:29.143751    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 02:36:29.143751    8616 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 02:36:29.143751    8616 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 02:36:29.143751    8616 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 02:36:29.144932    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 02:36:29.145097    8616 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 02:36:29.145141    8616 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 02:36:29.145334    8616 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0229 02:36:29.146122    8616 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-314500-m03 san=[172.19.1.210 172.19.1.210 localhost 127.0.0.1 minikube multinode-314500-m03]
	I0229 02:36:29.222744    8616 provision.go:172] copyRemoteCerts
	I0229 02:36:29.231836    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:36:29.231938    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:36:31.219722    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:36:31.219722    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:31.219982    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:36:33.634393    8616 main.go:141] libmachine: [stdout =====>] : 172.19.1.210
	
	I0229 02:36:33.634393    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:33.634892    8616 sshutil.go:53] new ssh client: &{IP:172.19.1.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m03\id_rsa Username:docker}
	I0229 02:36:33.757456    8616 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5251222s)
	I0229 02:36:33.757456    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 02:36:33.757456    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 02:36:33.812394    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 02:36:33.813323    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0229 02:36:33.860949    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 02:36:33.861581    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:36:33.909786    8616 provision.go:86] duration metric: configureAuth took 13.5863834s
	I0229 02:36:33.909786    8616 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:36:33.910603    8616 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:36:33.910696    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:36:35.924404    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:36:35.924582    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:35.924685    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:36:38.354914    8616 main.go:141] libmachine: [stdout =====>] : 172.19.1.210
	
	I0229 02:36:38.355267    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:38.359447    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:36:38.359916    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.1.210 22 <nil> <nil>}
	I0229 02:36:38.359916    8616 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 02:36:38.510374    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 02:36:38.510438    8616 buildroot.go:70] root file system type: tmpfs
	I0229 02:36:38.510568    8616 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 02:36:38.510647    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:36:40.507659    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:36:40.507659    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:40.507659    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:36:42.890847    8616 main.go:141] libmachine: [stdout =====>] : 172.19.1.210
	
	I0229 02:36:42.890847    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:42.897457    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:36:42.898004    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.1.210 22 <nil> <nil>}
	I0229 02:36:42.898103    8616 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.2.252"
	Environment="NO_PROXY=172.19.2.252,172.19.4.42"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 02:36:43.079535    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.2.252
	Environment=NO_PROXY=172.19.2.252,172.19.4.42
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
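	The unit file echoed above relies on systemd's ExecStart reset idiom that its own comments describe: an empty `ExecStart=` first clears any command inherited from a base unit, so the following `ExecStart=` is the only one in effect (without the reset, a non-`oneshot` service with two `ExecStart=` lines fails to start). A minimal drop-in showing just that pattern (the override path is illustrative):

	```ini
	# /etc/systemd/system/docker.service.d/override.conf  (illustrative path)
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	```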
	I0229 02:36:43.079535    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:36:45.037895    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:36:45.037895    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:45.037987    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:36:47.470515    8616 main.go:141] libmachine: [stdout =====>] : 172.19.1.210
	
	I0229 02:36:47.471596    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:47.479643    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:36:47.479799    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.1.210 22 <nil> <nil>}
	I0229 02:36:47.479799    8616 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 02:36:48.667551    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
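	The `diff ... || { mv ...; systemctl ... }` one-liner above is an idempotent update: the rendered `docker.service.new` only replaces the live unit, and the daemon is only reloaded and restarted, when the two files differ (here they differ trivially because the old file does not exist yet, hence the `can't stat` message). A sketch of the same guard using temp files, with the restart replaced by an echo so it runs anywhere:

	```shell
	# Replace-and-restart only when the rendered unit differs.
	OLD=$(mktemp)                                 # stands in for docker.service
	NEW=$(mktemp)                                 # stands in for docker.service.new
	printf 'ExecStart=/usr/bin/dockerd\n' > "$NEW"
	if ! diff -u "$OLD" "$NEW" >/dev/null 2>&1; then
	  mv "$NEW" "$OLD"
	  echo "unit changed; would run: systemctl daemon-reload && systemctl restart docker"
	fi
	cat "$OLD"
	```

	On a second run with identical content, `diff` succeeds and the restart branch is skipped entirely, which is what keeps repeated provisioning cheap.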
	I0229 02:36:48.667551    8616 machine.go:91] provisioned docker machine in 37.4793696s
	I0229 02:36:48.667551    8616 start.go:300] post-start starting for "multinode-314500-m03" (driver="hyperv")
	I0229 02:36:48.667551    8616 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:36:48.677177    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:36:48.677177    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:36:50.702665    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:36:50.702665    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:50.703553    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:36:53.120805    8616 main.go:141] libmachine: [stdout =====>] : 172.19.1.210
	
	I0229 02:36:53.120979    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:53.121332    8616 sshutil.go:53] new ssh client: &{IP:172.19.1.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m03\id_rsa Username:docker}
	I0229 02:36:53.229105    8616 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5516755s)
	I0229 02:36:53.241619    8616 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:36:53.248529    8616 command_runner.go:130] > NAME=Buildroot
	I0229 02:36:53.248702    8616 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 02:36:53.248702    8616 command_runner.go:130] > ID=buildroot
	I0229 02:36:53.248702    8616 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 02:36:53.248702    8616 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 02:36:53.248702    8616 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:36:53.248702    8616 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 02:36:53.248702    8616 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 02:36:53.249362    8616 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> 33122.pem in /etc/ssl/certs
	I0229 02:36:53.249362    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /etc/ssl/certs/33122.pem
	I0229 02:36:53.259742    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:36:53.279482    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /etc/ssl/certs/33122.pem (1708 bytes)
	I0229 02:36:53.327582    8616 start.go:303] post-start completed in 4.6597718s
	I0229 02:36:53.327582    8616 fix.go:56] fixHost completed within 1m17.7889128s
	I0229 02:36:53.327582    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:36:55.348083    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:36:55.348360    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:55.348478    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:36:57.735516    8616 main.go:141] libmachine: [stdout =====>] : 172.19.1.210
	
	I0229 02:36:57.735516    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:57.741514    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:36:57.742011    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.1.210 22 <nil> <nil>}
	I0229 02:36:57.742011    8616 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 02:36:57.882549    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709174218.045124130
	
	I0229 02:36:57.882549    8616 fix.go:206] guest clock: 1709174218.045124130
	I0229 02:36:57.882549    8616 fix.go:219] Guest: 2024-02-29 02:36:58.04512413 +0000 UTC Remote: 2024-02-29 02:36:53.3275821 +0000 UTC m=+346.801495301 (delta=4.71754203s)
	I0229 02:36:57.882549    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:36:59.910417    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:36:59.910417    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:36:59.910631    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:37:02.344014    8616 main.go:141] libmachine: [stdout =====>] : 172.19.1.210
	
	I0229 02:37:02.344852    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:37:02.349113    8616 main.go:141] libmachine: Using SSH client type: native
	I0229 02:37:02.349336    8616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.1.210 22 <nil> <nil>}
	I0229 02:37:02.349336    8616 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709174217
	I0229 02:37:02.510966    8616 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 02:36:57 UTC 2024
	
	I0229 02:37:02.510966    8616 fix.go:226] clock set: Thu Feb 29 02:36:57 UTC 2024
	 (err=<nil>)
	I0229 02:37:02.511069    8616 start.go:83] releasing machines lock for "multinode-314500-m03", held for 1m26.9720088s
	I0229 02:37:02.511152    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:37:04.515458    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:37:04.515458    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:37:04.515458    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:37:06.929779    8616 main.go:141] libmachine: [stdout =====>] : 172.19.1.210
	
	I0229 02:37:06.930198    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:37:06.931080    8616 out.go:177] * Found network options:
	I0229 02:37:06.931662    8616 out.go:177]   - NO_PROXY=172.19.2.252,172.19.4.42
	W0229 02:37:06.932207    8616 proxy.go:119] fail to check proxy env: Error ip not in block
	W0229 02:37:06.932276    8616 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 02:37:06.932923    8616 out.go:177]   - NO_PROXY=172.19.2.252,172.19.4.42
	W0229 02:37:06.933490    8616 proxy.go:119] fail to check proxy env: Error ip not in block
	W0229 02:37:06.933549    8616 proxy.go:119] fail to check proxy env: Error ip not in block
	W0229 02:37:06.934693    8616 proxy.go:119] fail to check proxy env: Error ip not in block
	W0229 02:37:06.934693    8616 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 02:37:06.937076    8616 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:37:06.937337    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:37:06.953360    8616 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 02:37:06.953360    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:37:08.994727    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:37:08.994727    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:37:08.994821    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:37:08.995722    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:37:08.995722    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:37:08.996041    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 02:37:11.493334    8616 main.go:141] libmachine: [stdout =====>] : 172.19.1.210
	
	I0229 02:37:11.493413    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:37:11.493835    8616 sshutil.go:53] new ssh client: &{IP:172.19.1.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m03\id_rsa Username:docker}
	I0229 02:37:11.518321    8616 main.go:141] libmachine: [stdout =====>] : 172.19.1.210
	
	I0229 02:37:11.519239    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:37:11.519623    8616 sshutil.go:53] new ssh client: &{IP:172.19.1.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m03\id_rsa Username:docker}
	I0229 02:37:11.602579    8616 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0229 02:37:11.603026    8616 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.6494071s)
	W0229 02:37:11.603026    8616 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:37:11.611785    8616 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:37:11.698899    8616 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 02:37:11.698899    8616 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7615585s)
	I0229 02:37:11.699024    8616 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0229 02:37:11.699111    8616 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:37:11.699111    8616 start.go:475] detecting cgroup driver to use...
	I0229 02:37:11.699238    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:37:11.733188    8616 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0229 02:37:11.746742    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 02:37:11.776641    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:37:11.798450    8616 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:37:11.809332    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:37:11.838625    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:37:11.870012    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:37:11.907143    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:37:11.936513    8616 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:37:11.972332    8616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 02:37:12.003804    8616 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:37:12.021852    8616 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 02:37:12.033934    8616 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:37:12.061388    8616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:37:12.260929    8616 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 02:37:12.295423    8616 start.go:475] detecting cgroup driver to use...
	I0229 02:37:12.305316    8616 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 02:37:12.328798    8616 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0229 02:37:12.328798    8616 command_runner.go:130] > [Unit]
	I0229 02:37:12.328798    8616 command_runner.go:130] > Description=Docker Application Container Engine
	I0229 02:37:12.328798    8616 command_runner.go:130] > Documentation=https://docs.docker.com
	I0229 02:37:12.328798    8616 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0229 02:37:12.328798    8616 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0229 02:37:12.328798    8616 command_runner.go:130] > StartLimitBurst=3
	I0229 02:37:12.328798    8616 command_runner.go:130] > StartLimitIntervalSec=60
	I0229 02:37:12.328798    8616 command_runner.go:130] > [Service]
	I0229 02:37:12.328798    8616 command_runner.go:130] > Type=notify
	I0229 02:37:12.328798    8616 command_runner.go:130] > Restart=on-failure
	I0229 02:37:12.328798    8616 command_runner.go:130] > Environment=NO_PROXY=172.19.2.252
	I0229 02:37:12.329333    8616 command_runner.go:130] > Environment=NO_PROXY=172.19.2.252,172.19.4.42
	I0229 02:37:12.329333    8616 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0229 02:37:12.329333    8616 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0229 02:37:12.329333    8616 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0229 02:37:12.329333    8616 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0229 02:37:12.329422    8616 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0229 02:37:12.329422    8616 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0229 02:37:12.329459    8616 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0229 02:37:12.329459    8616 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0229 02:37:12.329505    8616 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0229 02:37:12.329505    8616 command_runner.go:130] > ExecStart=
	I0229 02:37:12.329542    8616 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0229 02:37:12.329568    8616 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0229 02:37:12.329601    8616 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0229 02:37:12.329636    8616 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0229 02:37:12.329636    8616 command_runner.go:130] > LimitNOFILE=infinity
	I0229 02:37:12.329636    8616 command_runner.go:130] > LimitNPROC=infinity
	I0229 02:37:12.329636    8616 command_runner.go:130] > LimitCORE=infinity
	I0229 02:37:12.329636    8616 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0229 02:37:12.329674    8616 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0229 02:37:12.329674    8616 command_runner.go:130] > TasksMax=infinity
	I0229 02:37:12.329674    8616 command_runner.go:130] > TimeoutStartSec=0
	I0229 02:37:12.329674    8616 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0229 02:37:12.329674    8616 command_runner.go:130] > Delegate=yes
	I0229 02:37:12.329674    8616 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0229 02:37:12.329674    8616 command_runner.go:130] > KillMode=process
	I0229 02:37:12.329674    8616 command_runner.go:130] > [Install]
	I0229 02:37:12.329674    8616 command_runner.go:130] > WantedBy=multi-user.target
	I0229 02:37:12.339479    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:37:12.369542    8616 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:37:12.405132    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:37:12.440480    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:37:12.473964    8616 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:37:12.528878    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:37:12.554098    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:37:12.588258    8616 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0229 02:37:12.597320    8616 ssh_runner.go:195] Run: which cri-dockerd
	I0229 02:37:12.603355    8616 command_runner.go:130] > /usr/bin/cri-dockerd
	I0229 02:37:12.614923    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 02:37:12.633514    8616 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 02:37:12.674396    8616 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 02:37:12.870094    8616 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 02:37:13.052646    8616 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 02:37:13.052766    8616 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 02:37:13.102593    8616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:37:13.302201    8616 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 02:37:14.845160    8616 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5419894s)
	I0229 02:37:14.854145    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 02:37:14.889888    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:37:14.924245    8616 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 02:37:15.131682    8616 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 02:37:15.337131    8616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:37:15.538627    8616 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 02:37:15.578870    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:37:15.613953    8616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:37:15.820789    8616 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 02:37:15.923662    8616 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 02:37:15.933493    8616 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 02:37:15.943823    8616 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0229 02:37:15.943951    8616 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 02:37:15.944026    8616 command_runner.go:130] > Device: 0,22	Inode: 856         Links: 1
	I0229 02:37:15.944070    8616 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0229 02:37:15.944070    8616 command_runner.go:130] > Access: 2024-02-29 02:37:16.008084906 +0000
	I0229 02:37:15.944102    8616 command_runner.go:130] > Modify: 2024-02-29 02:37:16.008084906 +0000
	I0229 02:37:15.944102    8616 command_runner.go:130] > Change: 2024-02-29 02:37:16.011084894 +0000
	I0229 02:37:15.944102    8616 command_runner.go:130] >  Birth: -
	I0229 02:37:15.944155    8616 start.go:543] Will wait 60s for crictl version
	I0229 02:37:15.955186    8616 ssh_runner.go:195] Run: which crictl
	I0229 02:37:15.961135    8616 command_runner.go:130] > /usr/bin/crictl
	I0229 02:37:15.970733    8616 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:37:16.050585    8616 command_runner.go:130] > Version:  0.1.0
	I0229 02:37:16.050585    8616 command_runner.go:130] > RuntimeName:  docker
	I0229 02:37:16.050585    8616 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0229 02:37:16.050585    8616 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 02:37:16.051629    8616 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 02:37:16.060289    8616 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:37:16.098102    8616 command_runner.go:130] > 24.0.7
	I0229 02:37:16.106199    8616 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:37:16.140872    8616 command_runner.go:130] > 24.0.7
	I0229 02:37:16.143924    8616 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 02:37:16.144121    8616 out.go:177]   - env NO_PROXY=172.19.2.252
	I0229 02:37:16.144820    8616 out.go:177]   - env NO_PROXY=172.19.2.252,172.19.4.42
	I0229 02:37:16.145241    8616 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 02:37:16.149649    8616 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 02:37:16.149649    8616 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 02:37:16.149649    8616 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 02:37:16.149649    8616 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:a3:c1 Flags:up|broadcast|multicast|running}
	I0229 02:37:16.152338    8616 ip.go:210] interface addr: fe80::fc78:4865:5cac:d448/64
	I0229 02:37:16.152338    8616 ip.go:210] interface addr: 172.19.0.1/20
	I0229 02:37:16.161064    8616 ssh_runner.go:195] Run: grep 172.19.0.1	host.minikube.internal$ /etc/hosts
	I0229 02:37:16.169148    8616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:37:16.193743    8616 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500 for IP: 172.19.1.210
	I0229 02:37:16.193743    8616 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:37:16.194438    8616 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 02:37:16.194712    8616 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 02:37:16.195033    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 02:37:16.195300    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 02:37:16.195522    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 02:37:16.195634    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 02:37:16.195990    8616 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem (1338 bytes)
	W0229 02:37:16.196265    8616 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312_empty.pem, impossibly tiny 0 bytes
	I0229 02:37:16.196338    8616 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 02:37:16.196556    8616 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 02:37:16.196751    8616 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 02:37:16.197002    8616 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0229 02:37:16.197335    8616 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem (1708 bytes)
	I0229 02:37:16.197479    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem -> /usr/share/ca-certificates/3312.pem
	I0229 02:37:16.197589    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /usr/share/ca-certificates/33122.pem
	I0229 02:37:16.197687    8616 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:37:16.203094    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:37:16.255406    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:37:16.306586    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:37:16.355317    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:37:16.403035    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem --> /usr/share/ca-certificates/3312.pem (1338 bytes)
	I0229 02:37:16.447616    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /usr/share/ca-certificates/33122.pem (1708 bytes)
	I0229 02:37:16.493089    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:37:16.546313    8616 ssh_runner.go:195] Run: openssl version
	I0229 02:37:16.556372    8616 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 02:37:16.566293    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3312.pem && ln -fs /usr/share/ca-certificates/3312.pem /etc/ssl/certs/3312.pem"
	I0229 02:37:16.594888    8616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3312.pem
	I0229 02:37:16.602583    8616 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:37:16.602646    8616 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:37:16.613402    8616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3312.pem
	I0229 02:37:16.627644    8616 command_runner.go:130] > 51391683
	I0229 02:37:16.636913    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3312.pem /etc/ssl/certs/51391683.0"
	I0229 02:37:16.665865    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/33122.pem && ln -fs /usr/share/ca-certificates/33122.pem /etc/ssl/certs/33122.pem"
	I0229 02:37:16.696698    8616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/33122.pem
	I0229 02:37:16.703359    8616 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:37:16.703359    8616 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:37:16.714376    8616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/33122.pem
	I0229 02:37:16.722392    8616 command_runner.go:130] > 3ec20f2e
	I0229 02:37:16.732041    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/33122.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:37:16.762225    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:37:16.791943    8616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:37:16.799068    8616 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:37:16.799153    8616 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:37:16.809727    8616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:37:16.818392    8616 command_runner.go:130] > b5213941
	I0229 02:37:16.826741    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:37:16.858432    8616 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:37:16.866789    8616 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:37:16.866846    8616 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:37:16.874427    8616 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 02:37:16.910929    8616 command_runner.go:130] > cgroupfs
	I0229 02:37:16.912435    8616 cni.go:84] Creating CNI manager for ""
	I0229 02:37:16.912496    8616 cni.go:136] 3 nodes found, recommending kindnet
	I0229 02:37:16.912537    8616 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:37:16.912580    8616 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.1.210 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-314500 NodeName:multinode-314500-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.2.252"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.1.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:37:16.912720    8616 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.1.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-314500-m03"
	  kubeletExtraArgs:
	    node-ip: 172.19.1.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.2.252"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:37:16.912720    8616 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-314500-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.1.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:37:16.922745    8616 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:37:16.941800    8616 command_runner.go:130] > kubeadm
	I0229 02:37:16.941800    8616 command_runner.go:130] > kubectl
	I0229 02:37:16.941800    8616 command_runner.go:130] > kubelet
	I0229 02:37:16.941800    8616 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:37:16.950983    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0229 02:37:16.970338    8616 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0229 02:37:17.004116    8616 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:37:17.042737    8616 ssh_runner.go:195] Run: grep 172.19.2.252	control-plane.minikube.internal$ /etc/hosts
	I0229 02:37:17.049513    8616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.2.252	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
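The one-liner above pins `control-plane.minikube.internal` to the control-plane IP by filtering out any stale entry and appending the current one. A self-contained sketch of the same rewrite, run against a temp copy so it is safe to execute anywhere (the seed contents and the `10.0.0.1` stale entry are hypothetical; the IP is the one from this log):

```shell
# Sketch of minikube's /etc/hosts rewrite: drop any existing
# control-plane.minikube.internal line, then append the current IP.
# Operates on a temp copy instead of the real /etc/hosts.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n' > "$HOSTS"

IP="172.19.2.252"        # control-plane IP from the log
TAB=$(printf '\t')
{ grep -v "${TAB}control-plane\.minikube\.internal$" "$HOSTS"; \
  printf '%s\tcontrol-plane.minikube.internal\n' "$IP"; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```

The grep-then-append shape makes the rewrite idempotent: re-running it with the same IP leaves exactly one entry.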
	I0229 02:37:17.070773    8616 host.go:66] Checking if "multinode-314500" exists ...
	I0229 02:37:17.070773    8616 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:37:17.070773    8616 start.go:304] JoinCluster: &{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.4.42 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.1.210 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false in
gress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:37:17.070773    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0229 02:37:17.070773    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:37:19.037209    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:37:19.038004    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:37:19.038004    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:37:21.491392    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:37:21.491392    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:37:21.492024    8616 sshutil.go:53] new ssh client: &{IP:172.19.2.252 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:37:21.691912    8616 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token agodxd.ly3m3bobk4q19ff2 --discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 
	I0229 02:37:21.692904    8616 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.6218389s)
	I0229 02:37:21.693003    8616 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:172.19.1.210 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0229 02:37:21.693075    8616 host.go:66] Checking if "multinode-314500" exists ...
	I0229 02:37:21.703013    8616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-314500-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0229 02:37:21.703013    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:37:23.743553    8616 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:37:23.743626    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:37:23.743690    8616 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:37:26.181503    8616 main.go:141] libmachine: [stdout =====>] : 172.19.2.252
	
	I0229 02:37:26.181503    8616 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:37:26.182709    8616 sshutil.go:53] new ssh client: &{IP:172.19.2.252 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:37:26.356598    8616 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0229 02:37:26.412620    8616 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-7g9t8, kube-system/kube-proxy-zvlt2
	I0229 02:37:26.414696    8616 command_runner.go:130] > node/multinode-314500-m03 cordoned
	I0229 02:37:26.414696    8616 command_runner.go:130] > node/multinode-314500-m03 drained
	I0229 02:37:26.414802    8616 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-314500-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (4.7115272s)
	I0229 02:37:26.414877    8616 node.go:108] successfully drained node "m03"
	I0229 02:37:26.415873    8616 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:37:26.416376    8616 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.252:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:37:26.417190    8616 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0229 02:37:26.417190    8616 round_trippers.go:463] DELETE https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:37:26.417190    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:26.417190    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:26.417190    8616 round_trippers.go:473]     Content-Type: application/json
	I0229 02:37:26.417190    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:26.435784    8616 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0229 02:37:26.435784    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:26.435784    8616 round_trippers.go:580]     Content-Length: 171
	I0229 02:37:26.436176    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:26 GMT
	I0229 02:37:26.436176    8616 round_trippers.go:580]     Audit-Id: 4d8c005f-9b6c-491f-99de-1fd350fef659
	I0229 02:37:26.436176    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:26.436176    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:26.436176    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:26.436176    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:26.436312    8616 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-314500-m03","kind":"nodes","uid":"e3855f89-f53a-45b3-8e99-79bb2f21bdb0"}}
	I0229 02:37:26.436429    8616 node.go:124] successfully deleted node "m03"
	I0229 02:37:26.436429    8616 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:172.19.1.210 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0229 02:37:26.436509    8616 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:172.19.1.210 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0229 02:37:26.436509    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token agodxd.ly3m3bobk4q19ff2 --discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-314500-m03"
	I0229 02:37:26.668664    8616 command_runner.go:130] ! W0229 02:37:26.830847    1343 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0229 02:37:27.133589    8616 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:37:28.984119    8616 command_runner.go:130] > [preflight] Running pre-flight checks
	I0229 02:37:28.984813    8616 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0229 02:37:28.984871    8616 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0229 02:37:28.984871    8616 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:37:28.984871    8616 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:37:28.984871    8616 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 02:37:28.984871    8616 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0229 02:37:28.984871    8616 command_runner.go:130] > This node has joined the cluster:
	I0229 02:37:28.984871    8616 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0229 02:37:28.984871    8616 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0229 02:37:28.984871    8616 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0229 02:37:28.985193    8616 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token agodxd.ly3m3bobk4q19ff2 --discovery-token-ca-cert-hash sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-314500-m03": (2.5485425s)
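The rejoin sequence above boils down to: mint a join command on the control plane (`kubeadm token create --print-join-command --ttl=0`), then run it on the worker with the docker CRI socket and node name appended. A sketch that reconstructs the final command string without contacting a cluster (the token and CA hash are the values printed in this log, not live secrets):

```shell
# Reconstruct the kubeadm join invocation minikube ran on the worker.
# Values are copied from the log above purely for illustration.
NODE="multinode-314500-m03"
TOKEN="agodxd.ly3m3bobk4q19ff2"
CA_HASH="sha256:9c722bf1323b6c4442b9327af3863f0d7e41785d89e27c3b473d4929b028e022"

JOIN_CMD="kubeadm join control-plane.minikube.internal:8443 \
--token ${TOKEN} \
--discovery-token-ca-cert-hash ${CA_HASH} \
--ignore-preflight-errors=all \
--cri-socket unix:///var/run/cri-dockerd.sock \
--node-name=${NODE}"

echo "$JOIN_CMD"
```

Note the `unix://` scheme on the CRI socket: the preflight warning at the top of the join output is kubeadm auto-prepending it because minikube passed the bare path.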
	I0229 02:37:28.985193    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0229 02:37:29.276898    8616 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0229 02:37:29.535390    8616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=multinode-314500 minikube.k8s.io/updated_at=2024_02_29T02_37_29_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:37:29.690165    8616 command_runner.go:130] > node/multinode-314500-m02 labeled
	I0229 02:37:29.690212    8616 command_runner.go:130] > node/multinode-314500-m03 labeled
	I0229 02:37:29.690212    8616 start.go:306] JoinCluster complete in 12.6187373s
	I0229 02:37:29.690420    8616 cni.go:84] Creating CNI manager for ""
	I0229 02:37:29.690481    8616 cni.go:136] 3 nodes found, recommending kindnet
	I0229 02:37:29.702184    8616 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 02:37:29.711415    8616 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 02:37:29.711538    8616 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 02:37:29.711538    8616 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 02:37:29.711538    8616 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 02:37:29.711608    8616 command_runner.go:130] > Access: 2024-02-29 02:31:42.605077900 +0000
	I0229 02:37:29.711608    8616 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 02:37:29.711674    8616 command_runner.go:130] > Change: 2024-02-29 02:31:30.415000000 +0000
	I0229 02:37:29.711674    8616 command_runner.go:130] >  Birth: -
	I0229 02:37:29.711811    8616 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 02:37:29.711904    8616 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 02:37:29.757884    8616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 02:37:30.154008    8616 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0229 02:37:30.154008    8616 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0229 02:37:30.154008    8616 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0229 02:37:30.154008    8616 command_runner.go:130] > daemonset.apps/kindnet configured
	I0229 02:37:30.155131    8616 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:37:30.155666    8616 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.252:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:37:30.156121    8616 round_trippers.go:463] GET https://172.19.2.252:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 02:37:30.156121    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:30.156121    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:30.156121    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:30.159718    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:37:30.160320    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:30.160320    8616 round_trippers.go:580]     Content-Length: 292
	I0229 02:37:30.160320    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:30 GMT
	I0229 02:37:30.160320    8616 round_trippers.go:580]     Audit-Id: 2eea80c9-e493-4e44-9b4f-f23a066dd315
	I0229 02:37:30.160320    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:30.160440    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:30.160440    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:30.160440    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:30.160440    8616 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"1439","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 02:37:30.160440    8616 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-314500" context rescaled to 1 replicas
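After the join, minikube pins coredns at one replica through the deployment's `autoscaling/v1` Scale subresource (the GET shown above). A hedged sketch of the shape of that call; the write verb and exact body are assumptions, and the kubectl equivalent would be `kubectl -n kube-system scale deployment coredns --replicas=1`:

```shell
# Shape of the Scale-subresource request used to rescale coredns to 1 replica.
# Assembled as strings only; no API server is contacted.
APISERVER="https://172.19.2.252:8443"   # control-plane endpoint from the log
SCALE_URL="${APISERVER}/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale"
SCALE_BODY='{"kind":"Scale","apiVersion":"autoscaling/v1","spec":{"replicas":1}}'

echo "PUT ${SCALE_URL}"
echo "${SCALE_BODY}"
```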
	I0229 02:37:30.160440    8616 start.go:223] Will wait 6m0s for node &{Name:m03 IP:172.19.1.210 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0229 02:37:30.161198    8616 out.go:177] * Verifying Kubernetes components...
	I0229 02:37:30.176960    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:37:30.204017    8616 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:37:30.204953    8616 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.252:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:37:30.206052    8616 node_ready.go:35] waiting up to 6m0s for node "multinode-314500-m03" to be "Ready" ...
	I0229 02:37:30.206216    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:37:30.206279    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:30.206339    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:30.206339    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:30.209612    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:37:30.210413    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:30.210459    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:30.210459    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:30.210459    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:30.210459    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:30.210459    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:30 GMT
	I0229 02:37:30.210459    8616 round_trippers.go:580]     Audit-Id: 5e0a197e-1bf3-4539-9d59-98431818f693
	I0229 02:37:30.210591    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m03","uid":"08d2740d-ef9b-44a3-98a4-4df97a2f4f14","resourceVersion":"1764","creationTimestamp":"2024-02-29T02:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3611 chars]
	I0229 02:37:30.716583    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:37:30.716583    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:30.716583    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:30.716583    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:30.720903    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:37:30.720903    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:30.720903    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:30 GMT
	I0229 02:37:30.720903    8616 round_trippers.go:580]     Audit-Id: 19c6728b-11a6-47a1-9f59-e6a167385f31
	I0229 02:37:30.720903    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:30.720903    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:30.720903    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:30.720903    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:30.720903    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m03","uid":"08d2740d-ef9b-44a3-98a4-4df97a2f4f14","resourceVersion":"1769","creationTimestamp":"2024-02-29T02:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3720 chars]
	I0229 02:37:31.207722    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:37:31.207722    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:31.207912    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:31.207912    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:31.214921    8616 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:37:31.214921    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:31.214921    8616 round_trippers.go:580]     Audit-Id: 37bb46bb-d71c-4324-807c-298a63de38fc
	I0229 02:37:31.214921    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:31.214921    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:31.214921    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:31.214921    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:31.214921    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:31 GMT
	I0229 02:37:31.214921    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m03","uid":"08d2740d-ef9b-44a3-98a4-4df97a2f4f14","resourceVersion":"1769","creationTimestamp":"2024-02-29T02:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3720 chars]
	I0229 02:37:31.710199    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:37:31.710412    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:31.710412    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:31.710412    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:31.717319    8616 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:37:31.717319    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:31.717381    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:31.717381    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:31.717407    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:31 GMT
	I0229 02:37:31.717407    8616 round_trippers.go:580]     Audit-Id: 57e33159-6977-4b4d-a72a-44b1a7880053
	I0229 02:37:31.717407    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:31.717407    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:31.717934    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m03","uid":"08d2740d-ef9b-44a3-98a4-4df97a2f4f14","resourceVersion":"1769","creationTimestamp":"2024-02-29T02:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3720 chars]
	I0229 02:37:32.214219    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:37:32.214307    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:32.214307    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:32.214307    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:32.219301    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:37:32.219301    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:32.219301    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:32.219301    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:32.219301    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:32.219301    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:32 GMT
	I0229 02:37:32.219301    8616 round_trippers.go:580]     Audit-Id: 05b1f40c-dae9-430a-8274-527f43f421af
	I0229 02:37:32.219301    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:32.220141    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m03","uid":"08d2740d-ef9b-44a3-98a4-4df97a2f4f14","resourceVersion":"1769","creationTimestamp":"2024-02-29T02:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3720 chars]
	I0229 02:37:32.220141    8616 node_ready.go:58] node "multinode-314500-m03" has status "Ready":"False"
	I0229 02:37:32.710975    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:37:32.710975    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:32.710975    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:32.710975    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:32.718149    8616 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:37:32.718733    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:32.718787    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:32.718787    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:32.718787    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:32.718787    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:32.718787    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:32 GMT
	I0229 02:37:32.718787    8616 round_trippers.go:580]     Audit-Id: 85f6c95a-1fdb-4297-bce0-92528b5d64de
	I0229 02:37:32.718787    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m03","uid":"08d2740d-ef9b-44a3-98a4-4df97a2f4f14","resourceVersion":"1769","creationTimestamp":"2024-02-29T02:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3720 chars]
	I0229 02:37:33.217131    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:37:33.217233    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:33.217233    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:33.217233    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:33.221563    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:37:33.221795    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:33.221795    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:33.221795    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:33.221795    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:33.221795    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:33.221795    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:33 GMT
	I0229 02:37:33.221795    8616 round_trippers.go:580]     Audit-Id: a01d1cfb-45b1-406f-adeb-31f7454aae50
	I0229 02:37:33.221986    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m03","uid":"08d2740d-ef9b-44a3-98a4-4df97a2f4f14","resourceVersion":"1769","creationTimestamp":"2024-02-29T02:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3720 chars]
	I0229 02:37:33.716696    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:37:33.716696    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:33.716696    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:33.716696    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:33.720552    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:37:33.720552    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:33.720552    8616 round_trippers.go:580]     Audit-Id: 4b184856-40ed-4a47-a7d9-ea28e8026253
	I0229 02:37:33.720552    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:33.720552    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:33.720552    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:33.720882    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:33.720882    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:33 GMT
	I0229 02:37:33.721178    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m03","uid":"08d2740d-ef9b-44a3-98a4-4df97a2f4f14","resourceVersion":"1769","creationTimestamp":"2024-02-29T02:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3720 chars]
	I0229 02:37:34.221566    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:37:34.221628    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:34.221628    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:34.221628    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:34.228837    8616 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:37:34.228886    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:34.228886    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:34.228943    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:34.228943    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:34.228943    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:34 GMT
	I0229 02:37:34.228943    8616 round_trippers.go:580]     Audit-Id: 1168926b-eaae-40e8-aec8-d5060c68d51a
	I0229 02:37:34.228943    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:34.229029    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m03","uid":"08d2740d-ef9b-44a3-98a4-4df97a2f4f14","resourceVersion":"1769","creationTimestamp":"2024-02-29T02:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3720 chars]
	I0229 02:37:34.229029    8616 node_ready.go:58] node "multinode-314500-m03" has status "Ready":"False"
	I0229 02:37:34.720812    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:37:34.720812    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:34.720812    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:34.720812    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:34.725002    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:37:34.725002    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:34.725517    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:34.725517    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:34 GMT
	I0229 02:37:34.725517    8616 round_trippers.go:580]     Audit-Id: d0135ca7-ebb0-4e1d-9661-c822a594086e
	I0229 02:37:34.725517    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:34.725517    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:34.725517    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:34.725682    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m03","uid":"08d2740d-ef9b-44a3-98a4-4df97a2f4f14","resourceVersion":"1769","creationTimestamp":"2024-02-29T02:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3720 chars]
	I0229 02:37:35.210351    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:37:35.210351    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:35.210351    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:35.210351    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:35.216040    8616 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:37:35.216211    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:35.216211    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:35.216211    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:35.216211    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:35 GMT
	I0229 02:37:35.216211    8616 round_trippers.go:580]     Audit-Id: 6263802a-6681-4856-a107-92b63fc03519
	I0229 02:37:35.216211    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:35.216211    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:35.216430    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m03","uid":"08d2740d-ef9b-44a3-98a4-4df97a2f4f14","resourceVersion":"1769","creationTimestamp":"2024-02-29T02:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3720 chars]
	I0229 02:37:35.713539    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:37:35.713605    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:35.713667    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:35.713667    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:35.717158    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:37:35.717619    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:35.717619    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:35.717619    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:35.717619    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:35 GMT
	I0229 02:37:35.717619    8616 round_trippers.go:580]     Audit-Id: e119fb3c-6560-4c3b-8c13-dc78fc1c6a21
	I0229 02:37:35.717619    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:35.717619    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:35.717839    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m03","uid":"08d2740d-ef9b-44a3-98a4-4df97a2f4f14","resourceVersion":"1769","creationTimestamp":"2024-02-29T02:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3720 chars]
	I0229 02:37:36.213024    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:37:36.213126    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:36.213126    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:36.213126    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:36.217430    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:37:36.217589    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:36.217589    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:36.217589    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:36.217589    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:36.217589    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:36.217589    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:36 GMT
	I0229 02:37:36.217589    8616 round_trippers.go:580]     Audit-Id: 4ed76e84-c2cc-475f-a033-2cc93b7a77ae
	I0229 02:37:36.217761    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m03","uid":"08d2740d-ef9b-44a3-98a4-4df97a2f4f14","resourceVersion":"1779","creationTimestamp":"2024-02-29T02:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3755 chars]
	I0229 02:37:36.218289    8616 node_ready.go:49] node "multinode-314500-m03" has status "Ready":"True"
	I0229 02:37:36.218289    8616 node_ready.go:38] duration metric: took 6.0118465s waiting for node "multinode-314500-m03" to be "Ready" ...
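The `node_ready` checks above repeatedly GET the Node object and test whether its `Ready` condition has status `"True"`. A minimal sketch of that test, applied to bodies like the ones logged here (`is_node_ready` is a hypothetical helper for illustration, not minikube's actual code):

```python
def is_node_ready(node: dict) -> bool:
    """Return True if the Node object has a condition of type "Ready"
    whose status is the string "True" (as in node_ready.go:49 above)."""
    for cond in node.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False

# Condition shapes modeled on the Kubernetes Node status schema; the
# response bodies in the log are truncated before "status", so these
# sample objects are illustrative, not copied from the log.
not_ready = {"kind": "Node",
             "status": {"conditions": [{"type": "Ready", "status": "False"}]}}
ready = {"kind": "Node",
         "status": {"conditions": [{"type": "Ready", "status": "True"}]}}
```

Until the kubelet posts a `Ready=True` condition, each poll logs `has status "Ready":"False"` and the loop continues, which is exactly the cadence visible in the requests above.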
	I0229 02:37:36.218289    8616 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:37:36.218436    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods
	I0229 02:37:36.218436    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:36.218436    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:36.218515    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:36.223929    8616 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:37:36.223929    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:36.223929    8616 round_trippers.go:580]     Audit-Id: 9c785a14-8485-4caf-9781-6d29220795f9
	I0229 02:37:36.223929    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:36.223929    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:36.223929    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:36.223929    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:36.223929    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:36 GMT
	I0229 02:37:36.230942    8616 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1779"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82863 chars]
	I0229 02:37:36.234456    8616 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:37:36.234579    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:37:36.234579    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:36.234579    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:36.234579    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:36.238168    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:37:36.238328    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:36.238328    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:36.238328    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:36 GMT
	I0229 02:37:36.238328    8616 round_trippers.go:580]     Audit-Id: ba75a6cd-d890-4d3d-9b76-0468787469dd
	I0229 02:37:36.238328    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:36.238328    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:36.238328    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:36.238580    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1435","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6492 chars]
	I0229 02:37:36.239178    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:37:36.239178    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:36.239178    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:36.239178    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:36.241913    8616 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:37:36.242493    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:36.242493    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:36.242493    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:36.242493    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:36 GMT
	I0229 02:37:36.242493    8616 round_trippers.go:580]     Audit-Id: 6afd467b-4555-408d-b208-0ae49325eef1
	I0229 02:37:36.242493    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:36.242493    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:36.242874    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:37:36.243483    8616 pod_ready.go:92] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"True"
	I0229 02:37:36.243483    8616 pod_ready.go:81] duration metric: took 8.9978ms waiting for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
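Each `pod_ready` wait above follows the same deadline-bounded polling pattern: fetch the object roughly every half second (matching the ~500 ms spacing of the GETs in this log) until the readiness check passes or the 6m0s budget is exhausted. A hedged sketch of that loop, assuming caller-supplied `fetch` and `check` functions rather than minikube's real client:

```python
import time

def wait_ready(fetch, check, timeout_s=360.0, interval_s=0.5,
               clock=time.monotonic, sleep=time.sleep):
    """Poll fetch() until check(obj) is truthy or timeout_s elapses.

    Hypothetical illustration of the wait loop seen in the log, not
    minikube's implementation. clock/sleep are injectable for testing.
    Returns True on success, False on timeout.
    """
    deadline = clock() + timeout_s
    while True:
        if check(fetch()):
            return True
        if clock() >= deadline:
            return False
        sleep(interval_s)
```

In the log, the node wait took 6.0118465s (about twelve polls) while the coredns pod was already Ready on the first fetch, so that wait returned after a single round trip (8.9978ms).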
	I0229 02:37:36.243579    8616 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:37:36.243579    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-314500
	I0229 02:37:36.243683    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:36.243683    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:36.243683    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:36.247053    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:37:36.247053    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:36.247053    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:36 GMT
	I0229 02:37:36.247053    8616 round_trippers.go:580]     Audit-Id: eacb1371-1589-49a3-b9cd-b533c1bd7d4f
	I0229 02:37:36.247053    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:36.247053    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:36.247053    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:36.247053    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:36.248101    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-314500","namespace":"kube-system","uid":"b4f5f225-c7b2-4d26-a0ad-f09b2045ea14","resourceVersion":"1409","creationTimestamp":"2024-02-29T02:33:03Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.2.252:2379","kubernetes.io/config.hash":"b583592d76a92080553678603be807ce","kubernetes.io/config.mirror":"b583592d76a92080553678603be807ce","kubernetes.io/config.seen":"2024-02-29T02:32:57.667230131Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:33:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5853 chars]
	I0229 02:37:36.248655    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:37:36.248745    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:36.248745    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:36.248745    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:36.250700    8616 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 02:37:36.251546    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:36.251546    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:36.251546    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:36.251546    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:36.251546    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:36.251546    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:36 GMT
	I0229 02:37:36.251629    8616 round_trippers.go:580]     Audit-Id: 6ab770f0-4df7-4ad1-bec1-f45698b5d74a
	I0229 02:37:36.251933    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:37:36.252346    8616 pod_ready.go:92] pod "etcd-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:37:36.252404    8616 pod_ready.go:81] duration metric: took 8.7673ms waiting for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:37:36.252404    8616 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:37:36.252517    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-314500
	I0229 02:37:36.252517    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:36.252578    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:36.252578    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:36.255885    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:37:36.255885    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:36.255885    8616 round_trippers.go:580]     Audit-Id: 34232499-d3c3-438d-b67a-8466c9a04306
	I0229 02:37:36.255885    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:36.255885    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:36.255885    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:36.255885    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:36.255885    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:36 GMT
	I0229 02:37:36.256749    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-314500","namespace":"kube-system","uid":"d64133c2-8b75-4b12-b270-cbd060c1374e","resourceVersion":"1408","creationTimestamp":"2024-02-29T02:33:04Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.2.252:8443","kubernetes.io/config.hash":"462233dfd1884b55b9575973e0f20340","kubernetes.io/config.mirror":"462233dfd1884b55b9575973e0f20340","kubernetes.io/config.seen":"2024-02-29T02:32:57.667231431Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:33:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7391 chars]
	I0229 02:37:36.257329    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:37:36.257392    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:36.257392    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:36.257392    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:36.259760    8616 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:37:36.259760    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:36.259760    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:36.259760    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:36.259760    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:36 GMT
	I0229 02:37:36.259760    8616 round_trippers.go:580]     Audit-Id: efb7e6c8-6cd8-40ae-a04d-368e81d20d06
	I0229 02:37:36.259760    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:36.259760    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:36.260335    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:37:36.260686    8616 pod_ready.go:92] pod "kube-apiserver-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:37:36.260762    8616 pod_ready.go:81] duration metric: took 8.3575ms waiting for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:37:36.260762    8616 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:37:36.260835    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-314500
	I0229 02:37:36.260910    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:36.260910    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:36.260943    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:36.264071    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:37:36.264071    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:36.264071    8616 round_trippers.go:580]     Audit-Id: 2458d772-d530-4193-a7d8-d755185e531a
	I0229 02:37:36.264071    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:36.264071    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:36.264071    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:36.264071    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:36.264071    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:36 GMT
	I0229 02:37:36.265027    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-314500","namespace":"kube-system","uid":"58e57902-e113-44a9-b5b5-4aba2ba13491","resourceVersion":"1426","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.mirror":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.seen":"2024-02-29T02:15:52.221398986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7171 chars]
	I0229 02:37:36.265579    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:37:36.265579    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:36.265579    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:36.265639    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:36.267561    8616 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 02:37:36.268203    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:36.268203    8616 round_trippers.go:580]     Audit-Id: b4aefeae-b1eb-43b2-a671-ab33d4b37ecb
	I0229 02:37:36.268203    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:36.268203    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:36.268203    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:36.268203    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:36.268203    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:36 GMT
	I0229 02:37:36.268330    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:37:36.268330    8616 pod_ready.go:92] pod "kube-controller-manager-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:37:36.268330    8616 pod_ready.go:81] duration metric: took 7.568ms waiting for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:37:36.268330    8616 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:37:36.418352    8616 request.go:629] Waited for 150.0128ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gbrl
	I0229 02:37:36.418568    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gbrl
	I0229 02:37:36.418568    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:36.418568    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:36.418568    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:36.422270    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:37:36.422270    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:36.422270    8616 round_trippers.go:580]     Audit-Id: cbe60fe0-0692-4d2b-8179-2863be8b2111
	I0229 02:37:36.422270    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:36.422270    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:36.422270    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:36.422270    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:36.422270    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:36 GMT
	I0229 02:37:36.423458    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4gbrl","generateName":"kube-proxy-","namespace":"kube-system","uid":"accb56cb-79ee-4f16-b05e-91bf554c4a60","resourceVersion":"1598","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0229 02:37:36.620828    8616 request.go:629] Waited for 196.5567ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:37:36.620828    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:37:36.620828    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:36.620828    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:36.620828    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:36.625645    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:37:36.626518    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:36.626518    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:36.626518    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:36.626518    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:36.626518    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:36 GMT
	I0229 02:37:36.626518    8616 round_trippers.go:580]     Audit-Id: 52a2546a-d529-42b5-824f-69791f027f45
	I0229 02:37:36.626518    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:36.626680    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"2332789d-7280-427a-9644-fc1ffcfc737d","resourceVersion":"1763","creationTimestamp":"2024-02-29T02:35:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:35:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3803 chars]
	I0229 02:37:36.626680    8616 pod_ready.go:92] pod "kube-proxy-4gbrl" in "kube-system" namespace has status "Ready":"True"
	I0229 02:37:36.626680    8616 pod_ready.go:81] duration metric: took 358.3296ms waiting for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:37:36.626680    8616 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:37:36.825939    8616 request.go:629] Waited for 198.4564ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:37:36.825939    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:37:36.826352    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:36.826423    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:36.826423    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:36.832332    8616 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:37:36.832472    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:36.832472    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:36.832472    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:36.832472    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:36 GMT
	I0229 02:37:36.832472    8616 round_trippers.go:580]     Audit-Id: 3036324e-1171-465d-84f7-cc592993d823
	I0229 02:37:36.832472    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:36.832472    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:36.832819    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6r6j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b84b22d-3786-4f9e-a23a-c7cfc93bb671","resourceVersion":"1324","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5735 chars]
	I0229 02:37:37.029389    8616 request.go:629] Waited for 195.2867ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:37:37.029562    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:37:37.029562    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:37.029562    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:37.029562    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:37.034163    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:37:37.034627    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:37.034702    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:37.034702    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:37 GMT
	I0229 02:37:37.034702    8616 round_trippers.go:580]     Audit-Id: 8edd80d7-bd15-4e24-9f3b-d81900f5d93c
	I0229 02:37:37.034702    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:37.034702    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:37.034702    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:37.034702    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:37:37.035577    8616 pod_ready.go:92] pod "kube-proxy-6r6j4" in "kube-system" namespace has status "Ready":"True"
	I0229 02:37:37.035577    8616 pod_ready.go:81] duration metric: took 408.8745ms waiting for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:37:37.035577    8616 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zvlt2" in "kube-system" namespace to be "Ready" ...
	I0229 02:37:37.217863    8616 request.go:629] Waited for 182.0162ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvlt2
	I0229 02:37:37.217863    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvlt2
	I0229 02:37:37.217863    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:37.217863    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:37.217863    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:37.222537    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:37:37.222568    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:37.222611    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:37.222611    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:37.222611    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:37 GMT
	I0229 02:37:37.222611    8616 round_trippers.go:580]     Audit-Id: 305b6850-6e7e-4a6b-9187-7e7da8c32f80
	I0229 02:37:37.222611    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:37.222611    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:37.222611    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zvlt2","generateName":"kube-proxy-","namespace":"kube-system","uid":"0f29dabe-dc06-4460-bf19-55470247dbcc","resourceVersion":"1765","creationTimestamp":"2024-02-29T02:28:50Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:28:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0229 02:37:37.420716    8616 request.go:629] Waited for 197.1202ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:37:37.420916    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500-m03
	I0229 02:37:37.421064    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:37.421064    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:37.421064    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:37.424373    8616 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:37:37.425370    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:37.425370    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:37.425370    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:37 GMT
	I0229 02:37:37.425370    8616 round_trippers.go:580]     Audit-Id: 225ae0f8-ab71-4e46-aff1-6aa8360b18ed
	I0229 02:37:37.425370    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:37.425370    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:37.425370    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:37.425897    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m03","uid":"08d2740d-ef9b-44a3-98a4-4df97a2f4f14","resourceVersion":"1779","creationTimestamp":"2024-02-29T02:37:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:37:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3755 chars]
	I0229 02:37:37.426369    8616 pod_ready.go:92] pod "kube-proxy-zvlt2" in "kube-system" namespace has status "Ready":"True"
	I0229 02:37:37.426369    8616 pod_ready.go:81] duration metric: took 390.7704ms waiting for pod "kube-proxy-zvlt2" in "kube-system" namespace to be "Ready" ...
	I0229 02:37:37.426431    8616 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:37:37.625549    8616 request.go:629] Waited for 198.899ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:37:37.625644    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:37:37.625644    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:37.625644    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:37.625644    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:37.630144    8616 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:37:37.630803    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:37.630803    8616 round_trippers.go:580]     Audit-Id: 353d0637-b37c-4fc9-ab6c-9fa772a01608
	I0229 02:37:37.630803    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:37.630803    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:37.630803    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:37.630803    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:37.630853    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:37 GMT
	I0229 02:37:37.630996    8616 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-314500","namespace":"kube-system","uid":"31fcecc6-17de-43a6-892d-37cd915de64b","resourceVersion":"1428","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.mirror":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.seen":"2024-02-29T02:15:52.221399886Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4901 chars]
	I0229 02:37:37.813632    8616 request.go:629] Waited for 181.889ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:37:37.813952    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes/multinode-314500
	I0229 02:37:37.813952    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:37.813952    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:37.813952    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:37.820376    8616 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:37:37.820376    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:37.820376    8616 round_trippers.go:580]     Audit-Id: 0e735e20-961d-4175-b6d6-cd9b44d61124
	I0229 02:37:37.820376    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:37.820376    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:37.820376    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:37.820376    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:37.820376    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:37 GMT
	I0229 02:37:37.821775    8616 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:37:37.821866    8616 pod_ready.go:92] pod "kube-scheduler-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:37:37.821866    8616 pod_ready.go:81] duration metric: took 395.4137ms waiting for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:37:37.821866    8616 pod_ready.go:38] duration metric: took 1.603488s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:37:37.821866    8616 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:37:37.832162    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:37:37.875545    8616 system_svc.go:56] duration metric: took 53.611ms WaitForService to wait for kubelet.
	I0229 02:37:37.875629    8616 kubeadm.go:581] duration metric: took 7.7147598s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:37:37.875679    8616 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:37:38.016840    8616 request.go:629] Waited for 140.8195ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.252:8443/api/v1/nodes
	I0229 02:37:38.016840    8616 round_trippers.go:463] GET https://172.19.2.252:8443/api/v1/nodes
	I0229 02:37:38.016840    8616 round_trippers.go:469] Request Headers:
	I0229 02:37:38.016840    8616 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:37:38.017083    8616 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:37:38.022338    8616 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:37:38.022338    8616 round_trippers.go:577] Response Headers:
	I0229 02:37:38.022338    8616 round_trippers.go:580]     Audit-Id: e5b1d9f8-c966-4d4b-82ac-f4d1ca1ee539
	I0229 02:37:38.022338    8616 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:37:38.022338    8616 round_trippers.go:580]     Content-Type: application/json
	I0229 02:37:38.022338    8616 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:37:38.022338    8616 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:37:38.022338    8616 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:37:38 GMT
	I0229 02:37:38.023758    8616 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1781"},"items":[{"metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1398","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14832 chars]
	I0229 02:37:38.025057    8616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:37:38.025057    8616 node_conditions.go:123] node cpu capacity is 2
	I0229 02:37:38.025057    8616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:37:38.025057    8616 node_conditions.go:123] node cpu capacity is 2
	I0229 02:37:38.025057    8616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:37:38.025057    8616 node_conditions.go:123] node cpu capacity is 2
	I0229 02:37:38.025057    8616 node_conditions.go:105] duration metric: took 149.3691ms to run NodePressure ...
	I0229 02:37:38.025057    8616 start.go:228] waiting for startup goroutines ...
	I0229 02:37:38.025196    8616 start.go:242] writing updated cluster config ...
	I0229 02:37:38.034639    8616 ssh_runner.go:195] Run: rm -f paused
	I0229 02:37:38.172632    8616 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:37:38.173799    8616 out.go:177] * Done! kubectl is now configured to use "multinode-314500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 29 02:33:10 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:10.746638881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:33:10 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:10.746660278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:33:10 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:10.746925438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:33:10 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:10.751944891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:33:10 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:10.752223749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:33:10 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:10.752360529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:33:10 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:10.752817461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:33:10 multinode-314500 cri-dockerd[1214]: time="2024-02-29T02:33:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e767eb47350176a4126f26d5a53c4cde916eda3c9aaca1b2c177211fd7e3d7a1/resolv.conf as [nameserver 172.19.0.1]"
	Feb 29 02:33:11 multinode-314500 cri-dockerd[1214]: time="2024-02-29T02:33:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/509440c9783b98dcde8e6a25f47259d07f0e80281940b6ce332afdf0ecbb7dac/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 29 02:33:11 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:11.147269411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:33:11 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:11.147494977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:33:11 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:11.147577865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:33:11 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:11.148117684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:33:11 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:11.211752507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:33:11 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:11.212051362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:33:11 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:11.212164345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:33:11 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:11.213582234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:33:34 multinode-314500 dockerd[1016]: time="2024-02-29T02:33:34.073292536Z" level=info msg="ignoring event" container=b606f60fc884c37694fb102bd839a78cbb46c17dd91adb6a73d419c6a180d17b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 02:33:34 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:34.074364107Z" level=info msg="shim disconnected" id=b606f60fc884c37694fb102bd839a78cbb46c17dd91adb6a73d419c6a180d17b namespace=moby
	Feb 29 02:33:34 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:34.074499190Z" level=warning msg="cleaning up after shim disconnected" id=b606f60fc884c37694fb102bd839a78cbb46c17dd91adb6a73d419c6a180d17b namespace=moby
	Feb 29 02:33:34 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:34.074515788Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 02:33:45 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:45.872290457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:33:45 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:45.872466933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:33:45 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:45.872488030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:33:45 multinode-314500 dockerd[1022]: time="2024-02-29T02:33:45.873310419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	72b2d832587c8       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       2                   b37b7f8a0d78c       storage-provisioner
	745f9e18fc6ab       8c811b4aec35f                                                                                         4 minutes ago       Running             busybox                   1                   509440c9783b9       busybox-5b5d89c9d6-qcblm
	5814ae38cea0e       ead0a4a53df89                                                                                         4 minutes ago       Running             coredns                   1                   e767eb4735017       coredns-5dd5756b68-8g6tg
	1993ffe76ae7f       4950bb10b3f87                                                                                         4 minutes ago       Running             kindnet-cni               1                   349bdaee8eb96       kindnet-t9r77
	b606f60fc884c       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       1                   b37b7f8a0d78c       storage-provisioner
	341278d602ddd       83f6cc407eed8                                                                                         4 minutes ago       Running             kube-proxy                1                   02fbddb29c60a       kube-proxy-6r6j4
	ada445c976af3       73deb9a3f7025                                                                                         4 minutes ago       Running             etcd                      0                   9d23233978a7c       etcd-multinode-314500
	795e8c6845079       7fe0e6f37db33                                                                                         4 minutes ago       Running             kube-apiserver            0                   252fb20145ea7       kube-apiserver-multinode-314500
	f1cb36bcb3f3d       d058aa5ab969c                                                                                         4 minutes ago       Running             kube-controller-manager   1                   340bdcfacbe25       kube-controller-manager-multinode-314500
	41745010357fe       e3db313c6dbc0                                                                                         4 minutes ago       Running             kube-scheduler            1                   007d6c9a53e16       kube-scheduler-multinode-314500
	56fdd268ee231       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Exited              busybox                   0                   ffe504a01e326       busybox-5b5d89c9d6-qcblm
	11c14ebdfaf67       ead0a4a53df89                                                                                         21 minutes ago      Exited              coredns                   0                   8c944d91b6250       coredns-5dd5756b68-8g6tg
	dd61788b0a0d8       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              21 minutes ago      Exited              kindnet-cni               0                   edb41bd5e75d4       kindnet-t9r77
	c93e331307466       83f6cc407eed8                                                                                         21 minutes ago      Exited              kube-proxy                0                   4b10f8bd940b8       kube-proxy-6r6j4
	ab0c4864aee58       e3db313c6dbc0                                                                                         22 minutes ago      Exited              kube-scheduler            0                   bf7b9750ae9ea       kube-scheduler-multinode-314500
	26b1ab05f99a9       d058aa5ab969c                                                                                         22 minutes ago      Exited              kube-controller-manager   0                   96810146c69cf       kube-controller-manager-multinode-314500
	
	
	==> coredns [11c14ebdfaf6] <==
	[INFO] 10.244.0.3:55803 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074704s
	[INFO] 10.244.0.3:52953 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063204s
	[INFO] 10.244.0.3:35356 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000217512s
	[INFO] 10.244.0.3:51868 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000073604s
	[INFO] 10.244.0.3:43420 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103505s
	[INFO] 10.244.0.3:51899 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000210611s
	[INFO] 10.244.0.3:56850 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00018761s
	[INFO] 10.244.1.2:34482 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097705s
	[INFO] 10.244.1.2:36018 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150108s
	[INFO] 10.244.1.2:50932 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064203s
	[INFO] 10.244.1.2:38051 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129007s
	[INFO] 10.244.0.3:41360 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000316917s
	[INFO] 10.244.0.3:60778 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160008s
	[INFO] 10.244.0.3:57010 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133407s
	[INFO] 10.244.0.3:43292 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127407s
	[INFO] 10.244.1.2:34858 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135708s
	[INFO] 10.244.1.2:60624 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000269714s
	[INFO] 10.244.1.2:46116 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100405s
	[INFO] 10.244.1.2:57306 - 5 "PTR IN 1.0.19.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000138608s
	[INFO] 10.244.0.3:57177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000084804s
	[INFO] 10.244.0.3:55463 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000274415s
	[INFO] 10.244.0.3:36032 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185809s
	[INFO] 10.244.0.3:42058 - 5 "PTR IN 1.0.19.172.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 100 0.000083604s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5814ae38cea0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c43704ee218c3500d97c54254d76c1d56cc0443961fea557ef898f1da8154a1212605c10203ede1e288070d97e67d107ee3d60ae9c1e40b060414629f7811dd
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:32846 - 7493 "HINFO IN 6477765139827559342.7079461035665089981. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.126040178s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               multinode-314500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-314500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=multinode-314500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T02_15_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:15:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-314500
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:37:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:33:08 +0000   Thu, 29 Feb 2024 02:15:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:33:08 +0000   Thu, 29 Feb 2024 02:15:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:33:08 +0000   Thu, 29 Feb 2024 02:15:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:33:08 +0000   Thu, 29 Feb 2024 02:33:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.2.252
	  Hostname:    multinode-314500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 7c79f53294df4a2e92d491506b3f6f45
	  System UUID:                d0919ea2-7b7b-e246-9348-925d639776b8
	  Boot ID:                    2410693e-1be2-4826-ad1f-0bd9db69db25
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-qcblm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-5dd5756b68-8g6tg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-multinode-314500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m54s
	  kube-system                 kindnet-t9r77                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-multinode-314500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-controller-manager-multinode-314500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-6r6j4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-multinode-314500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 4m53s              kube-proxy       
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node multinode-314500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node multinode-314500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node multinode-314500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node multinode-314500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node multinode-314500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     22m                kubelet          Node multinode-314500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                node-controller  Node multinode-314500 event: Registered Node multinode-314500 in Controller
	  Normal  NodeReady                21m                kubelet          Node multinode-314500 status is now: NodeReady
	  Normal  Starting                 5m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m (x8 over 5m)    kubelet          Node multinode-314500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m (x8 over 5m)    kubelet          Node multinode-314500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m (x7 over 5m)    kubelet          Node multinode-314500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m42s              node-controller  Node multinode-314500 event: Registered Node multinode-314500 in Controller
	
	
	Name:               multinode-314500-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-314500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=multinode-314500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_02_29T02_37_29_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:35:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-314500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:37:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:35:33 +0000   Thu, 29 Feb 2024 02:35:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:35:33 +0000   Thu, 29 Feb 2024 02:35:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:35:33 +0000   Thu, 29 Feb 2024 02:35:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:35:33 +0000   Thu, 29 Feb 2024 02:35:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.4.42
	  Hostname:    multinode-314500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 e610aac6e4be42609bc76ef694a8facf
	  System UUID:                b1627b4d-7d75-ed47-9ee8-e9d67e74df72
	  Boot ID:                    a1e79ebd-9754-4a2d-a740-898f5164b060
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-vh2zk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kindnet-6r7b8               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-4gbrl            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 18m                    kube-proxy  
	  Normal  Starting                 2m27s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  19m (x5 over 19m)      kubelet     Node multinode-314500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x5 over 19m)      kubelet     Node multinode-314500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x5 over 19m)      kubelet     Node multinode-314500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m                    kubelet     Node multinode-314500-m02 status is now: NodeReady
	  Normal  Starting                 2m29s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m29s (x2 over 2m29s)  kubelet     Node multinode-314500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m29s (x2 over 2m29s)  kubelet     Node multinode-314500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m29s (x2 over 2m29s)  kubelet     Node multinode-314500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m29s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m24s                  kubelet     Node multinode-314500-m02 status is now: NodeReady
	
	
	Name:               multinode-314500-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-314500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=multinode-314500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_02_29T02_37_29_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:37:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-314500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:37:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:37:36 +0000   Thu, 29 Feb 2024 02:37:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:37:36 +0000   Thu, 29 Feb 2024 02:37:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:37:36 +0000   Thu, 29 Feb 2024 02:37:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:37:36 +0000   Thu, 29 Feb 2024 02:37:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.1.210
	  Hostname:    multinode-314500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 a295f7b8070840fe95996cede2949c7a
	  System UUID:                ecd784dd-4f45-f54c-9713-3ce44f4ba103
	  Boot ID:                    d4728df7-d934-47c9-8bff-536bd4495ce7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7g9t8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m7s
	  kube-system                 kube-proxy-zvlt2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 27s                  kube-proxy  
	  Normal  Starting                 8m58s                kube-proxy  
	  Normal  NodeHasSufficientMemory  9m7s (x2 over 9m7s)  kubelet     Node multinode-314500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m7s (x2 over 9m7s)  kubelet     Node multinode-314500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m7s (x2 over 9m7s)  kubelet     Node multinode-314500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m7s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 9m7s                 kubelet     Starting kubelet.
	  Normal  NodeReady                8m50s                kubelet     Node multinode-314500-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  30s (x2 over 30s)    kubelet     Node multinode-314500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s (x2 over 30s)    kubelet     Node multinode-314500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s (x2 over 30s)    kubelet     Node multinode-314500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  30s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 30s                  kubelet     Starting kubelet.
	  Normal  NodeReady                21s                  kubelet     Node multinode-314500-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.054871] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.022067] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	              * this clock source is slow. Consider trying other clock sources
	[  +5.948951] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.682407] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +2.001718] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	[  +7.882796] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Feb29 02:32] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.186112] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[ +24.365735] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.102373] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.495069] systemd-fstab-generator[982]: Ignoring "noauto" option for root device
	[  +0.216375] systemd-fstab-generator[994]: Ignoring "noauto" option for root device
	[  +0.230922] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	[  +1.971993] systemd-fstab-generator[1167]: Ignoring "noauto" option for root device
	[  +0.204128] systemd-fstab-generator[1179]: Ignoring "noauto" option for root device
	[  +0.200187] systemd-fstab-generator[1191]: Ignoring "noauto" option for root device
	[  +0.271212] systemd-fstab-generator[1206]: Ignoring "noauto" option for root device
	[  +4.292960] systemd-fstab-generator[1425]: Ignoring "noauto" option for root device
	[  +0.113506] kauditd_printk_skb: 205 callbacks suppressed
	[Feb29 02:33] kauditd_printk_skb: 62 callbacks suppressed
	[  +7.515005] kauditd_printk_skb: 48 callbacks suppressed
	[ +13.133541] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [ada445c976af] <==
	{"level":"info","ts":"2024-02-29T02:32:59.387249Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T02:32:59.387381Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T02:32:59.388873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 switched to configuration voters=(2921898997477636162)"}
	{"level":"info","ts":"2024-02-29T02:32:59.389298Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b70ab9772a44d22c","local-member-id":"288caba846397842","added-peer-id":"288caba846397842","added-peer-peer-urls":["https://172.19.2.165:2380"]}
	{"level":"info","ts":"2024-02-29T02:32:59.389585Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b70ab9772a44d22c","local-member-id":"288caba846397842","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:32:59.389984Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:32:59.399568Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-29T02:32:59.400116Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"288caba846397842","initial-advertise-peer-urls":["https://172.19.2.252:2380"],"listen-peer-urls":["https://172.19.2.252:2380"],"advertise-client-urls":["https://172.19.2.252:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.2.252:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T02:32:59.400319Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T02:32:59.400677Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.2.252:2380"}
	{"level":"info","ts":"2024-02-29T02:32:59.400977Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.2.252:2380"}
	{"level":"info","ts":"2024-02-29T02:33:00.854914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T02:33:00.85528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T02:33:00.855401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 received MsgPreVoteResp from 288caba846397842 at term 2"}
	{"level":"info","ts":"2024-02-29T02:33:00.855421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T02:33:00.855429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 received MsgVoteResp from 288caba846397842 at term 3"}
	{"level":"info","ts":"2024-02-29T02:33:00.85544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 became leader at term 3"}
	{"level":"info","ts":"2024-02-29T02:33:00.85545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 288caba846397842 elected leader 288caba846397842 at term 3"}
	{"level":"info","ts":"2024-02-29T02:33:00.857271Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"288caba846397842","local-member-attributes":"{Name:multinode-314500 ClientURLs:[https://172.19.2.252:2379]}","request-path":"/0/members/288caba846397842/attributes","cluster-id":"b70ab9772a44d22c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T02:33:00.85742Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:33:00.858003Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:33:00.859279Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.2.252:2379"}
	{"level":"info","ts":"2024-02-29T02:33:00.85934Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T02:33:00.859425Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T02:33:00.862313Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 02:37:58 up 6 min,  0 users,  load average: 0.15, 0.23, 0.12
	Linux multinode-314500 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1993ffe76ae7] <==
	I0229 02:37:25.302521       1 main.go:227] handling current node
	I0229 02:37:25.302539       1 main.go:223] Handling node with IPs: map[172.19.4.42:{}]
	I0229 02:37:25.302552       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:37:25.303065       1 main.go:223] Handling node with IPs: map[172.19.5.92:{}]
	I0229 02:37:25.303101       1 main.go:250] Node multinode-314500-m03 has CIDR [10.244.2.0/24] 
	I0229 02:37:35.317774       1 main.go:223] Handling node with IPs: map[172.19.2.252:{}]
	I0229 02:37:35.317908       1 main.go:227] handling current node
	I0229 02:37:35.317922       1 main.go:223] Handling node with IPs: map[172.19.4.42:{}]
	I0229 02:37:35.317930       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:37:35.318385       1 main.go:223] Handling node with IPs: map[172.19.1.210:{}]
	I0229 02:37:35.318492       1 main.go:250] Node multinode-314500-m03 has CIDR [10.244.2.0/24] 
	I0229 02:37:35.318566       1 routes.go:54] Removing invalid route {Ifindex: 3 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.19.5.92 Flags: [] Table: 254}
	I0229 02:37:35.318998       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.19.1.210 Flags: [] Table: 0} 
	I0229 02:37:45.329800       1 main.go:223] Handling node with IPs: map[172.19.2.252:{}]
	I0229 02:37:45.329957       1 main.go:227] handling current node
	I0229 02:37:45.329972       1 main.go:223] Handling node with IPs: map[172.19.4.42:{}]
	I0229 02:37:45.329980       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:37:45.330104       1 main.go:223] Handling node with IPs: map[172.19.1.210:{}]
	I0229 02:37:45.330115       1 main.go:250] Node multinode-314500-m03 has CIDR [10.244.2.0/24] 
	I0229 02:37:55.342752       1 main.go:223] Handling node with IPs: map[172.19.2.252:{}]
	I0229 02:37:55.342791       1 main.go:227] handling current node
	I0229 02:37:55.342819       1 main.go:223] Handling node with IPs: map[172.19.4.42:{}]
	I0229 02:37:55.342827       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:37:55.343221       1 main.go:223] Handling node with IPs: map[172.19.1.210:{}]
	I0229 02:37:55.343318       1 main.go:250] Node multinode-314500-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [dd61788b0a0d] <==
	I0229 02:29:13.157428       1 main.go:250] Node multinode-314500-m03 has CIDR [10.244.2.0/24] 
	I0229 02:29:23.171390       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:29:23.171506       1 main.go:227] handling current node
	I0229 02:29:23.171521       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:29:23.171558       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:29:23.171988       1 main.go:223] Handling node with IPs: map[172.19.5.92:{}]
	I0229 02:29:23.172114       1 main.go:250] Node multinode-314500-m03 has CIDR [10.244.2.0/24] 
	I0229 02:29:33.187888       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:29:33.187986       1 main.go:227] handling current node
	I0229 02:29:33.188000       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:29:33.188008       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:29:33.188359       1 main.go:223] Handling node with IPs: map[172.19.5.92:{}]
	I0229 02:29:33.188390       1 main.go:250] Node multinode-314500-m03 has CIDR [10.244.2.0/24] 
	I0229 02:29:43.205886       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:29:43.205915       1 main.go:227] handling current node
	I0229 02:29:43.205926       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:29:43.205933       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:29:43.206048       1 main.go:223] Handling node with IPs: map[172.19.5.92:{}]
	I0229 02:29:43.206055       1 main.go:250] Node multinode-314500-m03 has CIDR [10.244.2.0/24] 
	I0229 02:29:53.218026       1 main.go:223] Handling node with IPs: map[172.19.2.165:{}]
	I0229 02:29:53.218828       1 main.go:227] handling current node
	I0229 02:29:53.218913       1 main.go:223] Handling node with IPs: map[172.19.5.202:{}]
	I0229 02:29:53.219132       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:29:53.219374       1 main.go:223] Handling node with IPs: map[172.19.5.92:{}]
	I0229 02:29:53.219425       1 main.go:250] Node multinode-314500-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [795e8c684507] <==
	I0229 02:33:02.462800       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0229 02:33:02.462818       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0229 02:33:02.673439       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 02:33:02.675147       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0229 02:33:02.676466       1 aggregator.go:166] initial CRD sync complete...
	I0229 02:33:02.676579       1 autoregister_controller.go:141] Starting autoregister controller
	I0229 02:33:02.676587       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0229 02:33:02.676594       1 cache.go:39] Caches are synced for autoregister controller
	I0229 02:33:02.725791       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 02:33:02.747986       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 02:33:02.749101       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 02:33:02.749573       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0229 02:33:02.749584       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0229 02:33:02.753211       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 02:33:02.763937       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 02:33:03.455276       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0229 02:33:03.914832       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.19.2.165 172.19.2.252]
	I0229 02:33:03.918748       1 controller.go:624] quota admission added evaluator for: endpoints
	I0229 02:33:03.929591       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0229 02:33:05.900498       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0229 02:33:06.127972       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0229 02:33:06.139527       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0229 02:33:06.232170       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0229 02:33:06.241807       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0229 02:33:23.914616       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.19.2.252]
	
	
	==> kube-controller-manager [26b1ab05f99a] <==
	I0229 02:18:53.368926       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-314500-m02" podCIDRs=["10.244.1.0/24"]
	I0229 02:18:53.372475       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4gbrl"
	I0229 02:18:53.376875       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6r7b8"
	I0229 02:18:54.492680       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-314500-m02"
	I0229 02:18:54.493161       1 event.go:307] "Event occurred" object="multinode-314500-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-314500-m02 event: Registered Node multinode-314500-m02 in Controller"
	I0229 02:19:09.849595       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-314500-m02"
	I0229 02:19:34.656812       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0229 02:19:34.678854       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-826w2"
	I0229 02:19:34.689390       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-qcblm"
	I0229 02:19:34.698278       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="40.961829ms"
	I0229 02:19:34.725163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="26.446345ms"
	I0229 02:19:34.739405       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.836452ms"
	I0229 02:19:34.740025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.602µs"
	I0229 02:19:36.713325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="8.816271ms"
	I0229 02:19:36.713610       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="108.606µs"
	I0229 02:19:37.478878       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.961832ms"
	I0229 02:19:37.479378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="145.408µs"
	I0229 02:28:50.460621       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-314500-m03\" does not exist"
	I0229 02:28:50.461321       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-314500-m02"
	I0229 02:28:50.475928       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-314500-m03" podCIDRs=["10.244.2.0/24"]
	I0229 02:28:50.491768       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7g9t8"
	I0229 02:28:50.498647       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zvlt2"
	I0229 02:28:54.626163       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-314500-m03"
	I0229 02:28:54.626579       1 event.go:307] "Event occurred" object="multinode-314500-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-314500-m03 event: Registered Node multinode-314500-m03 in Controller"
	I0229 02:29:07.589769       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-314500-m02"
	
	
	==> kube-controller-manager [f1cb36bcb3f3] <==
	I0229 02:33:55.587534       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="118.283µs"
	I0229 02:35:23.906017       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-vh2zk"
	I0229 02:35:23.923776       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="29.285997ms"
	I0229 02:35:23.938526       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.581013ms"
	I0229 02:35:23.953937       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.320251ms"
	I0229 02:35:23.954093       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="42.19µs"
	I0229 02:35:28.332987       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-314500-m02\" does not exist"
	I0229 02:35:28.335731       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-826w2" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-826w2"
	I0229 02:35:28.344556       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-314500-m02" podCIDRs=["10.244.1.0/24"]
	I0229 02:35:29.195303       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="76.883µs"
	I0229 02:35:33.466693       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-314500-m02"
	I0229 02:35:33.491809       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="70.686µs"
	I0229 02:35:35.604478       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-826w2" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-826w2"
	I0229 02:35:43.315234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="135.372µs"
	I0229 02:35:43.326027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="244.15µs"
	I0229 02:35:43.342657       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="74.085µs"
	I0229 02:35:43.599445       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="77.784µs"
	I0229 02:35:43.602538       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="53.089µs"
	I0229 02:35:44.636172       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.402192ms"
	I0229 02:35:44.637027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="366.726µs"
	I0229 02:37:26.601733       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-314500-m02"
	I0229 02:37:27.923902       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-314500-m02"
	I0229 02:37:27.926815       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-314500-m03\" does not exist"
	I0229 02:37:27.944211       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-314500-m03" podCIDRs=["10.244.2.0/24"]
	I0229 02:37:36.144739       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-314500-m02"
	
	
	==> kube-proxy [341278d602dd] <==
	I0229 02:33:04.215978       1 server_others.go:69] "Using iptables proxy"
	I0229 02:33:04.251984       1 node.go:141] Successfully retrieved node IP: 172.19.2.252
	I0229 02:33:04.360615       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 02:33:04.360657       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 02:33:04.365625       1 server_others.go:152] "Using iptables Proxier"
	I0229 02:33:04.368633       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 02:33:04.369106       1 server.go:846] "Version info" version="v1.28.4"
	I0229 02:33:04.369119       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:33:04.372544       1 config.go:188] "Starting service config controller"
	I0229 02:33:04.374189       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 02:33:04.374236       1 config.go:315] "Starting node config controller"
	I0229 02:33:04.374243       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 02:33:04.381822       1 config.go:97] "Starting endpoint slice config controller"
	I0229 02:33:04.381894       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 02:33:04.475033       1 shared_informer.go:318] Caches are synced for service config
	I0229 02:33:04.475731       1 shared_informer.go:318] Caches are synced for node config
	I0229 02:33:04.482714       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [c93e33130746] <==
	I0229 02:16:07.488822       1 server_others.go:69] "Using iptables proxy"
	I0229 02:16:07.511408       1 node.go:141] Successfully retrieved node IP: 172.19.2.165
	I0229 02:16:07.646052       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 02:16:07.646080       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 02:16:07.652114       1 server_others.go:152] "Using iptables Proxier"
	I0229 02:16:07.652346       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 02:16:07.652698       1 server.go:846] "Version info" version="v1.28.4"
	I0229 02:16:07.652712       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:16:07.654751       1 config.go:188] "Starting service config controller"
	I0229 02:16:07.655126       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 02:16:07.655241       1 config.go:97] "Starting endpoint slice config controller"
	I0229 02:16:07.655327       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 02:16:07.656324       1 config.go:315] "Starting node config controller"
	I0229 02:16:07.676099       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 02:16:07.679653       1 shared_informer.go:318] Caches are synced for node config
	I0229 02:16:07.757691       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 02:16:07.757737       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [41745010357f] <==
	I0229 02:32:59.773752       1 serving.go:348] Generated self-signed cert in-memory
	W0229 02:33:02.542906       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0229 02:33:02.543166       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 02:33:02.543526       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 02:33:02.543686       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 02:33:02.659015       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0229 02:33:02.659400       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:33:02.665902       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 02:33:02.666208       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 02:33:02.666489       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 02:33:02.667821       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 02:33:02.768883       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ab0c4864aee5] <==
	E0229 02:15:49.044214       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 02:15:49.085996       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 02:15:49.086626       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 02:15:49.106158       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 02:15:49.106848       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 02:15:49.126181       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 02:15:49.126580       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0229 02:15:49.196878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 02:15:49.196987       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0229 02:15:49.236282       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 02:15:49.236658       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 02:15:49.372072       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 02:15:49.372116       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 02:15:49.403666       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 02:15:49.403942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 02:15:49.418593       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0229 02:15:49.418838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0229 02:15:49.492335       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0229 02:15:49.492758       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0229 02:15:49.585577       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 02:15:49.585986       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0229 02:15:52.113114       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 02:29:56.745696       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0229 02:29:56.745748       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0229 02:29:56.745974       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 29 02:33:57 multinode-314500 kubelet[1432]: E0229 02:33:57.835184    1432 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:33:57 multinode-314500 kubelet[1432]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:33:57 multinode-314500 kubelet[1432]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:33:57 multinode-314500 kubelet[1432]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:33:57 multinode-314500 kubelet[1432]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:34:57 multinode-314500 kubelet[1432]: E0229 02:34:57.834650    1432 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:34:57 multinode-314500 kubelet[1432]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:34:57 multinode-314500 kubelet[1432]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:34:57 multinode-314500 kubelet[1432]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:34:57 multinode-314500 kubelet[1432]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:35:57 multinode-314500 kubelet[1432]: E0229 02:35:57.834136    1432 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:35:57 multinode-314500 kubelet[1432]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:35:57 multinode-314500 kubelet[1432]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:35:57 multinode-314500 kubelet[1432]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:35:57 multinode-314500 kubelet[1432]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:36:57 multinode-314500 kubelet[1432]: E0229 02:36:57.833710    1432 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:36:57 multinode-314500 kubelet[1432]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:36:57 multinode-314500 kubelet[1432]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:36:57 multinode-314500 kubelet[1432]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:36:57 multinode-314500 kubelet[1432]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:37:57 multinode-314500 kubelet[1432]: E0229 02:37:57.840061    1432 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:37:57 multinode-314500 kubelet[1432]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:37:57 multinode-314500 kubelet[1432]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:37:57 multinode-314500 kubelet[1432]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:37:57 multinode-314500 kubelet[1432]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 02:37:49.930849    8588 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-314500 -n multinode-314500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-314500 -n multinode-314500: (11.2973901s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-314500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (508.81s)
TestMultiNode/serial/RestartMultiNode (190.53s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-314500 --wait=true -v=8 --alsologtostderr --driver=hyperv
multinode_test.go:382: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-314500 --wait=true -v=8 --alsologtostderr --driver=hyperv: exit status 1 (2m37.4695329s)

                                                
                                                
-- stdout --
	* [multinode-314500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node multinode-314500 in cluster multinode-314500
	* Restarting existing hyperv VM for "multinode-314500" ...
	* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Starting worker node multinode-314500-m02 in cluster multinode-314500
	* Restarting existing hyperv VM for "multinode-314500-m02" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 02:40:22.777683    1532 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 02:40:22.834525    1532 out.go:291] Setting OutFile to fd 1404 ...
	I0229 02:40:22.835418    1532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:40:22.835465    1532 out.go:304] Setting ErrFile to fd 1460...
	I0229 02:40:22.835465    1532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:40:22.854125    1532 out.go:298] Setting JSON to false
	I0229 02:40:22.857151    1532 start.go:129] hostinfo: {"hostname":"minikube5","uptime":270649,"bootTime":1708903773,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 02:40:22.857151    1532 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 02:40:22.858119    1532 out.go:177] * [multinode-314500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 02:40:22.859160    1532 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:40:22.859160    1532 notify.go:220] Checking for updates...
	I0229 02:40:22.860125    1532 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:40:22.860125    1532 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 02:40:22.860125    1532 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:40:22.861129    1532 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:40:22.862130    1532 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:40:22.863132    1532 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:40:27.896918    1532 out.go:177] * Using the hyperv driver based on existing profile
	I0229 02:40:27.897733    1532 start.go:299] selected driver: hyperv
	I0229 02:40:27.897733    1532 start.go:903] validating driver "hyperv" against &{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.4.42 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false k
ubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:40:27.897733    1532 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:40:27.943386    1532 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:40:27.943386    1532 cni.go:84] Creating CNI manager for ""
	I0229 02:40:27.943910    1532 cni.go:136] 2 nodes found, recommending kindnet
	I0229 02:40:27.943980    1532 start_flags.go:323] config:
	{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.4.42 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false
nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoP
auseInterval:1m0s}
	I0229 02:40:27.944621    1532 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:40:27.945916    1532 out.go:177] * Starting control plane node multinode-314500 in cluster multinode-314500
	I0229 02:40:27.946502    1532 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:40:27.946573    1532 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 02:40:27.946706    1532 cache.go:56] Caching tarball of preloaded images
	I0229 02:40:27.946966    1532 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:40:27.946966    1532 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 02:40:27.946966    1532 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:40:27.948916    1532 start.go:365] acquiring machines lock for multinode-314500: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:40:27.948916    1532 start.go:369] acquired machines lock for "multinode-314500" in 0s
	I0229 02:40:27.948916    1532 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:40:27.948916    1532 fix.go:54] fixHost starting: 
	I0229 02:40:27.949876    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:40:30.565033    1532 main.go:141] libmachine: [stdout =====>] : Off
	
	I0229 02:40:30.565033    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:30.565366    1532 fix.go:102] recreateIfNeeded on multinode-314500: state=Stopped err=<nil>
	W0229 02:40:30.565439    1532 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:40:30.566359    1532 out.go:177] * Restarting existing hyperv VM for "multinode-314500" ...
	I0229 02:40:30.566983    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-314500
	I0229 02:40:33.280525    1532 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:40:33.280525    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:33.280603    1532 main.go:141] libmachine: Waiting for host to start...
	I0229 02:40:33.280603    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:40:35.410766    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:40:35.410766    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:35.411454    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:40:37.786830    1532 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:40:37.786830    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:38.791448    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:40:40.872557    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:40:40.872557    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:40.872623    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:40:43.245619    1532 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:40:43.245619    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:44.253451    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:40:46.352087    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:40:46.352087    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:46.352087    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:40:48.740536    1532 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:40:48.740536    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:49.744234    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:40:51.834642    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:40:51.834845    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:51.834937    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:40:54.178970    1532 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:40:54.178970    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:55.185017    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:40:57.216991    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:40:57.216991    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:57.217189    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:40:59.641935    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:40:59.642528    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:59.645053    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:01.654539    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:01.654539    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:01.654627    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:04.090413    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:04.090413    1532 main.go:141] libmachine: [stderr =====>] : 
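The repeated `Get-VM ... .state` / `.ipaddresses[0]` queries above are the driver polling until the guest's network adapter reports an address (empty stdout until 02:40:59, then `172.19.2.238`). A minimal sketch of that retry-until-nonempty pattern, with a hypothetical `get_vm_ip` stub standing in for the real PowerShell call shown in the log:

```shell
# Sketch of the poll-until-an-address-appears loop seen in the log above.
# get_vm_ip is a stub standing in for the real query, which shells out to:
#   powershell.exe -NoProfile -NonInteractive "((Hyper-V\Get-VM <name>).networkadapters[0]).ipaddresses[0]"
# The stub returns nothing for the first two polls, mimicking an adapter that
# has not yet acquired an address.
get_vm_ip() {
  if [ "$1" -ge 3 ]; then echo "172.19.2.238"; fi
}

ip=""
attempt=0
while [ -z "$ip" ]; do
  attempt=$((attempt + 1))
  ip="$(get_vm_ip "$attempt")"
  if [ -z "$ip" ]; then
    sleep 0.1   # the real driver waits roughly a second between polls
  fi
done
echo "VM IP: $ip after $attempt polls"
```

In the log each poll round-trips through a fresh `powershell.exe` process, which is why a single state+IP query pair takes several seconds.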
	I0229 02:41:04.090413    1532 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:41:04.093506    1532 machine.go:88] provisioning docker machine ...
	I0229 02:41:04.093506    1532 buildroot.go:166] provisioning hostname "multinode-314500"
	I0229 02:41:04.093506    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:06.093944    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:06.094952    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:06.094997    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:08.516462    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:08.516462    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:08.521484    1532 main.go:141] libmachine: Using SSH client type: native
	I0229 02:41:08.521746    1532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.238 22 <nil> <nil>}
	I0229 02:41:08.521746    1532 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-314500 && echo "multinode-314500" | sudo tee /etc/hostname
	I0229 02:41:08.696369    1532 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-314500
	
	I0229 02:41:08.696369    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:10.744486    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:10.744486    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:10.744486    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:13.163550    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:13.163550    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:13.168162    1532 main.go:141] libmachine: Using SSH client type: native
	I0229 02:41:13.168162    1532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.238 22 <nil> <nil>}
	I0229 02:41:13.168162    1532 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-314500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-314500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-314500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:41:13.329613    1532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
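The SSH command above ensures `/etc/hosts` inside the guest maps `127.0.1.1` to the machine name. Replayed here against a scratch copy (simplified patterns, no sudo, hypothetical file contents) so the effect is visible without a VM:

```shell
# The /etc/hosts edit from the log, applied to a temp file: if the hostname is
# absent, either rewrite an existing 127.0.1.1 line or append a new one.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

NAME=multinode-314500
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # Replace the existing 127.0.1.1 entry with the new hostname
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

The empty `SSH cmd err, output:` line in the log is the no-op case: the rewrite branch ran but produced no stdout.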
	I0229 02:41:13.329830    1532 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 02:41:13.329830    1532 buildroot.go:174] setting up certificates
	I0229 02:41:13.329830    1532 provision.go:83] configureAuth start
	I0229 02:41:13.329923    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:15.326770    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:15.327541    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:15.327578    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:17.726055    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:17.726055    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:17.726055    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:19.759154    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:19.759154    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:19.759678    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:22.186855    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:22.186855    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:22.186855    1532 provision.go:138] copyHostCerts
	I0229 02:41:22.187571    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 02:41:22.187669    1532 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 02:41:22.187669    1532 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 02:41:22.187669    1532 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 02:41:22.189039    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 02:41:22.189274    1532 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 02:41:22.189274    1532 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 02:41:22.189566    1532 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 02:41:22.190325    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 02:41:22.190587    1532 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 02:41:22.190648    1532 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 02:41:22.190648    1532 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0229 02:41:22.191700    1532 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-314500 san=[172.19.2.238 172.19.2.238 localhost 127.0.0.1 minikube multinode-314500]
	I0229 02:41:22.671350    1532 provision.go:172] copyRemoteCerts
	I0229 02:41:22.680652    1532 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:41:22.680794    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:24.699199    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:24.699245    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:24.699306    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:27.138104    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:27.138104    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:27.138104    1532 sshutil.go:53] new ssh client: &{IP:172.19.2.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:41:27.254026    1532 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5729449s)
	I0229 02:41:27.254115    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 02:41:27.254247    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 02:41:27.298985    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 02:41:27.299294    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0229 02:41:27.347314    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 02:41:27.347775    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:41:27.394771    1532 provision.go:86] duration metric: configureAuth took 14.0640677s
	I0229 02:41:27.394771    1532 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:41:27.395476    1532 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:41:27.395476    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:29.453121    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:29.453861    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:29.453939    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:31.860828    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:31.861114    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:31.867223    1532 main.go:141] libmachine: Using SSH client type: native
	I0229 02:41:31.867745    1532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.238 22 <nil> <nil>}
	I0229 02:41:31.867837    1532 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 02:41:32.016154    1532 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 02:41:32.016241    1532 buildroot.go:70] root file system type: tmpfs
	I0229 02:41:32.016443    1532 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 02:41:32.016526    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:34.019135    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:34.019135    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:34.019210    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:36.440661    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:36.440953    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:36.445080    1532 main.go:141] libmachine: Using SSH client type: native
	I0229 02:41:36.445494    1532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.238 22 <nil> <nil>}
	I0229 02:41:36.445494    1532 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 02:41:36.638749    1532 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 02:41:36.638749    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:38.699052    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:38.699052    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:38.699052    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:41.118719    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:41.118719    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:41.123562    1532 main.go:141] libmachine: Using SSH client type: native
	I0229 02:41:41.124008    1532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.238 22 <nil> <nil>}
	I0229 02:41:41.124074    1532 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 02:41:42.558705    1532 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 02:41:42.559253    1532 machine.go:91] provisioned docker machine in 38.4636118s
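The `diff ... || { mv ...; systemctl ...; }` one-liner above installs the freshly rendered unit only when it differs from what is on disk. A small demonstration of that write-new/diff/swap pattern on scratch files (hypothetical paths, `systemctl` replaced by a marker variable):

```shell
# Install-if-changed, as used for docker.service in the log: diff exits
# non-zero when the files differ OR the target does not exist yet, which is
# exactly the "can't stat '/lib/systemd/system/docker.service'" case above.
workdir=$(mktemp -d)
unit="$workdir/docker.service"
printf '[Unit]\nDescription=Docker Application Container Engine\n' > "$unit.new"

installed=no
if ! diff -u "$unit" "$unit.new" 2>/dev/null; then
  mv "$unit.new" "$unit"
  installed=yes   # the real command then runs: systemctl daemon-reload && systemctl enable docker && systemctl restart docker
fi
echo "installed=$installed"
```

Because the swap is gated on the diff, an unchanged configuration never triggers a docker restart on subsequent provisioning passes.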
	I0229 02:41:42.559253    1532 start.go:300] post-start starting for "multinode-314500" (driver="hyperv")
	I0229 02:41:42.559313    1532 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:41:42.568473    1532 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:41:42.568473    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:44.582129    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:44.582129    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:44.582201    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:46.982912    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:46.983111    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:46.983218    1532 sshutil.go:53] new ssh client: &{IP:172.19.2.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:41:47.088469    1532 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5197449s)
	I0229 02:41:47.098906    1532 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:41:47.105968    1532 command_runner.go:130] > NAME=Buildroot
	I0229 02:41:47.105968    1532 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 02:41:47.105968    1532 command_runner.go:130] > ID=buildroot
	I0229 02:41:47.105968    1532 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 02:41:47.105968    1532 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 02:41:47.105968    1532 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:41:47.105968    1532 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 02:41:47.106822    1532 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 02:41:47.107546    1532 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> 33122.pem in /etc/ssl/certs
	I0229 02:41:47.107546    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /etc/ssl/certs/33122.pem
	I0229 02:41:47.116966    1532 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:41:47.136951    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /etc/ssl/certs/33122.pem (1708 bytes)
	I0229 02:41:47.183763    1532 start.go:303] post-start completed in 4.6241932s
	I0229 02:41:47.183763    1532 fix.go:56] fixHost completed within 1m19.2304508s
	I0229 02:41:47.183763    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:49.198966    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:49.198966    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:49.198966    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:51.582311    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:51.583115    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:51.587985    1532 main.go:141] libmachine: Using SSH client type: native
	I0229 02:41:51.588686    1532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.238 22 <nil> <nil>}
	I0229 02:41:51.588686    1532 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:41:51.731414    1532 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709174511.891458216
	
	I0229 02:41:51.731414    1532 fix.go:206] guest clock: 1709174511.891458216
	I0229 02:41:51.731414    1532 fix.go:219] Guest: 2024-02-29 02:41:51.891458216 +0000 UTC Remote: 2024-02-29 02:41:47.183763 +0000 UTC m=+84.494854101 (delta=4.707695216s)
	I0229 02:41:51.731414    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:53.759386    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:53.760162    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:53.760162    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:56.204141    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:56.204141    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:56.206039    1532 main.go:141] libmachine: Using SSH client type: native
	I0229 02:41:56.206039    1532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.238 22 <nil> <nil>}
	I0229 02:41:56.206039    1532 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709174511
	I0229 02:41:56.368921    1532 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 02:41:51 UTC 2024
	
	I0229 02:41:56.368921    1532 fix.go:226] clock set: Thu Feb 29 02:41:51 UTC 2024
	 (err=<nil>)
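The clock-fix step above reads the guest's epoch time over SSH (`date +%s.%N`), compares it with the host-side timestamp, and resets the guest clock when they drift. The delta arithmetic from the two timestamps recorded in this log can be reproduced directly (values copied from the `fix.go:219` line; the comparison logic is a sketch, not minikube's exact code):

```shell
# Guest vs. host clock skew, using the two epoch values from the log above.
guest=1709174511.891458216   # guest: `date +%s.%N` output at 02:41:51 UTC
host=1709174507.183763       # host-side Remote timestamp from the same log line
delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "%.6f", d }')
echo "delta=${delta}s"
# The log reports delta=4.707695216s; the driver then resyncs the guest with:
#   sudo date -s @1709174511
```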
	I0229 02:41:56.368921    1532 start.go:83] releasing machines lock for "multinode-314500", held for 1m28.4150988s
	I0229 02:41:56.369147    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:58.382753    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:58.382753    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:58.382753    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:42:00.755552    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:42:00.755945    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:42:00.760699    1532 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:42:00.760802    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:42:00.766702    1532 ssh_runner.go:195] Run: cat /version.json
	I0229 02:42:00.766702    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:42:02.779747    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:42:02.779747    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:42:02.779848    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:42:02.782057    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:42:02.782057    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:42:02.782274    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:42:05.266914    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:42:05.266914    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:42:05.267586    1532 sshutil.go:53] new ssh client: &{IP:172.19.2.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:42:05.290813    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:42:05.291067    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:42:05.291227    1532 sshutil.go:53] new ssh client: &{IP:172.19.2.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:42:05.376179    1532 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0229 02:42:05.376329    1532 ssh_runner.go:235] Completed: cat /version.json: (4.6093709s)
	I0229 02:42:05.388449    1532 ssh_runner.go:195] Run: systemctl --version
	I0229 02:42:05.507931    1532 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 02:42:05.508138    1532 command_runner.go:130] > systemd 252 (252)
	I0229 02:42:05.508138    1532 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7471387s)
	I0229 02:42:05.508138    1532 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0229 02:42:05.517184    1532 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 02:42:05.525754    1532 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0229 02:42:05.525754    1532 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:42:05.536848    1532 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:42:05.564981    1532 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0229 02:42:05.565079    1532 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:42:05.565079    1532 start.go:475] detecting cgroup driver to use...
	I0229 02:42:05.565482    1532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:42:05.599297    1532 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0229 02:42:05.608280    1532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 02:42:05.637070    1532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:42:05.656188    1532 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:42:05.664958    1532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:42:05.693329    1532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:42:05.721902    1532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:42:05.750212    1532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:42:05.777556    1532 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:42:05.807365    1532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 02:42:05.835742    1532 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:42:05.854932    1532 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 02:42:05.863887    1532 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:42:05.890144    1532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:42:06.097343    1532 ssh_runner.go:195] Run: sudo systemctl restart containerd
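	The containerd steps above apply in-place `sed` edits to `/etc/containerd/config.toml` (pinning the pause image and forcing the cgroupfs driver) before restarting the service. A minimal sketch of that edit style, run against a temporary copy with a hypothetical sample config rather than the real file from the VM:

	```shell
	# Hypothetical sample of the two config.toml keys the log rewrites.
	cfg="$(mktemp)"
	cat > "$cfg" <<'EOF'
	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.8"
	[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true
	EOF
	# Pin the pause image, as the log does for registry.k8s.io/pause:3.9.
	sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
	# Force the cgroupfs driver by disabling SystemdCgroup, matching the log.
	sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
	cat "$cfg"
	```

	The indentation capture group `( *)` preserves TOML nesting, so the same expression works wherever the key appears; on the real host the edit is followed by `systemctl daemon-reload && systemctl restart containerd`, as the subsequent log lines show.
	
	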
	I0229 02:42:06.129556    1532 start.go:475] detecting cgroup driver to use...
	I0229 02:42:06.140526    1532 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 02:42:06.166113    1532 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0229 02:42:06.166113    1532 command_runner.go:130] > [Unit]
	I0229 02:42:06.166113    1532 command_runner.go:130] > Description=Docker Application Container Engine
	I0229 02:42:06.166113    1532 command_runner.go:130] > Documentation=https://docs.docker.com
	I0229 02:42:06.166113    1532 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0229 02:42:06.166113    1532 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0229 02:42:06.166113    1532 command_runner.go:130] > StartLimitBurst=3
	I0229 02:42:06.166113    1532 command_runner.go:130] > StartLimitIntervalSec=60
	I0229 02:42:06.166113    1532 command_runner.go:130] > [Service]
	I0229 02:42:06.166113    1532 command_runner.go:130] > Type=notify
	I0229 02:42:06.166113    1532 command_runner.go:130] > Restart=on-failure
	I0229 02:42:06.166113    1532 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0229 02:42:06.167115    1532 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0229 02:42:06.167115    1532 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0229 02:42:06.167115    1532 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0229 02:42:06.167115    1532 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0229 02:42:06.167115    1532 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0229 02:42:06.167115    1532 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0229 02:42:06.167115    1532 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0229 02:42:06.167115    1532 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0229 02:42:06.167115    1532 command_runner.go:130] > ExecStart=
	I0229 02:42:06.167115    1532 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0229 02:42:06.167115    1532 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0229 02:42:06.167115    1532 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0229 02:42:06.167115    1532 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0229 02:42:06.167115    1532 command_runner.go:130] > LimitNOFILE=infinity
	I0229 02:42:06.167115    1532 command_runner.go:130] > LimitNPROC=infinity
	I0229 02:42:06.167115    1532 command_runner.go:130] > LimitCORE=infinity
	I0229 02:42:06.167115    1532 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0229 02:42:06.167115    1532 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0229 02:42:06.167115    1532 command_runner.go:130] > TasksMax=infinity
	I0229 02:42:06.167115    1532 command_runner.go:130] > TimeoutStartSec=0
	I0229 02:42:06.167115    1532 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0229 02:42:06.167115    1532 command_runner.go:130] > Delegate=yes
	I0229 02:42:06.167115    1532 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0229 02:42:06.167115    1532 command_runner.go:130] > KillMode=process
	I0229 02:42:06.167115    1532 command_runner.go:130] > [Install]
	I0229 02:42:06.167115    1532 command_runner.go:130] > WantedBy=multi-user.target
	I0229 02:42:06.176637    1532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:42:06.206705    1532 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:42:06.242628    1532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:42:06.280954    1532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:42:06.312303    1532 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:42:06.362775    1532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:42:06.385494    1532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:42:06.418911    1532 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0229 02:42:06.429451    1532 ssh_runner.go:195] Run: which cri-dockerd
	I0229 02:42:06.434887    1532 command_runner.go:130] > /usr/bin/cri-dockerd
	I0229 02:42:06.444028    1532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 02:42:06.460928    1532 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 02:42:06.503181    1532 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 02:42:06.712738    1532 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 02:42:06.915962    1532 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 02:42:06.916311    1532 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 02:42:06.960512    1532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:42:07.163380    1532 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 02:42:08.798372    1532 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6349004s)
	I0229 02:42:08.808627    1532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 02:42:08.843561    1532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:42:08.876982    1532 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 02:42:09.089675    1532 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 02:42:09.283179    1532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:42:09.504491    1532 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 02:42:09.546886    1532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:42:09.582352    1532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:42:09.774487    1532 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 02:42:09.879578    1532 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 02:42:09.888818    1532 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 02:42:09.898969    1532 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0229 02:42:09.898969    1532 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 02:42:09.898969    1532 command_runner.go:130] > Device: 0,22	Inode: 858         Links: 1
	I0229 02:42:09.898969    1532 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0229 02:42:09.898969    1532 command_runner.go:130] > Access: 2024-02-29 02:42:09.968763905 +0000
	I0229 02:42:09.898969    1532 command_runner.go:130] > Modify: 2024-02-29 02:42:09.968763905 +0000
	I0229 02:42:09.898969    1532 command_runner.go:130] > Change: 2024-02-29 02:42:09.973764265 +0000
	I0229 02:42:09.898969    1532 command_runner.go:130] >  Birth: -
	I0229 02:42:09.898969    1532 start.go:543] Will wait 60s for crictl version
	I0229 02:42:09.910720    1532 ssh_runner.go:195] Run: which crictl
	I0229 02:42:09.917529    1532 command_runner.go:130] > /usr/bin/crictl
	I0229 02:42:09.925899    1532 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:42:10.004576    1532 command_runner.go:130] > Version:  0.1.0
	I0229 02:42:10.004576    1532 command_runner.go:130] > RuntimeName:  docker
	I0229 02:42:10.004576    1532 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0229 02:42:10.004576    1532 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 02:42:10.004576    1532 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 02:42:10.012089    1532 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:42:10.042590    1532 command_runner.go:130] > 24.0.7
	I0229 02:42:10.051675    1532 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:42:10.084994    1532 command_runner.go:130] > 24.0.7
	I0229 02:42:10.087099    1532 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 02:42:10.087414    1532 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 02:42:10.092456    1532 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 02:42:10.092778    1532 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 02:42:10.092778    1532 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 02:42:10.092778    1532 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:a3:c1 Flags:up|broadcast|multicast|running}
	I0229 02:42:10.095994    1532 ip.go:210] interface addr: fe80::fc78:4865:5cac:d448/64
	I0229 02:42:10.095994    1532 ip.go:210] interface addr: 172.19.0.1/20
	I0229 02:42:10.105006    1532 ssh_runner.go:195] Run: grep 172.19.0.1	host.minikube.internal$ /etc/hosts
	I0229 02:42:10.112690    1532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
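	The `/etc/hosts` command above uses a grep-and-rewrite idiom: filter out any stale `host.minikube.internal` entry, append the current mapping, and copy the result back in one shot. A sketch of the same idiom against a temporary file instead of `/etc/hosts` (the stale `172.19.0.9` entry is an invented example):

	```shell
	# Build a throwaway hosts file with one stale minikube entry.
	hosts="$(mktemp)"
	tab="$(printf '\t')"
	printf '127.0.0.1\tlocalhost\n172.19.0.9\thost.minikube.internal\n' > "$hosts"
	ip="172.19.0.1"
	# Drop any old host.minikube.internal line, append the fresh one,
	# then replace the file -- mirroring the log's { grep -v ...; echo ...; } form.
	{ grep -v "${tab}host.minikube.internal$" "$hosts"; \
	  printf '%s\thost.minikube.internal\n' "$ip"; } > "$hosts.new"
	cp "$hosts.new" "$hosts"
	cat "$hosts"
	```

	Writing to a temp file and copying back (rather than redirecting onto the file being read) is what keeps the real command safe under `sudo cp`: the grep finishes reading `/etc/hosts` before it is overwritten.
	
	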
	I0229 02:42:10.136098    1532 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:42:10.144183    1532 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 02:42:10.177878    1532 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0229 02:42:10.177913    1532 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0229 02:42:10.177913    1532 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0229 02:42:10.177913    1532 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0229 02:42:10.177913    1532 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 02:42:10.177913    1532 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0229 02:42:10.177913    1532 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0229 02:42:10.177913    1532 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0229 02:42:10.177913    1532 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:42:10.177913    1532 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0229 02:42:10.177913    1532 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0229 02:42:10.177913    1532 docker.go:615] Images already preloaded, skipping extraction
	I0229 02:42:10.188018    1532 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 02:42:10.215735    1532 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0229 02:42:10.216589    1532 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0229 02:42:10.216589    1532 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0229 02:42:10.216589    1532 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0229 02:42:10.216589    1532 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 02:42:10.216589    1532 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0229 02:42:10.216660    1532 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0229 02:42:10.216660    1532 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0229 02:42:10.216660    1532 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:42:10.216660    1532 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0229 02:42:10.217527    1532 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0229 02:42:10.217599    1532 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:42:10.223963    1532 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 02:42:10.265074    1532 command_runner.go:130] > cgroupfs
	I0229 02:42:10.266245    1532 cni.go:84] Creating CNI manager for ""
	I0229 02:42:10.266571    1532 cni.go:136] 2 nodes found, recommending kindnet
	I0229 02:42:10.266636    1532 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:42:10.266810    1532 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.2.238 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-314500 NodeName:multinode-314500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.2.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.2.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:42:10.267257    1532 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.2.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-314500"
	  kubeletExtraArgs:
	    node-ip: 172.19.2.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.2.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:42:10.267324    1532 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-314500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.2.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:42:10.279608    1532 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:42:10.298239    1532 command_runner.go:130] > kubeadm
	I0229 02:42:10.298645    1532 command_runner.go:130] > kubectl
	I0229 02:42:10.298645    1532 command_runner.go:130] > kubelet
	I0229 02:42:10.298689    1532 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:42:10.309157    1532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:42:10.327724    1532 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0229 02:42:10.360450    1532 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:42:10.392713    1532 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0229 02:42:10.445623    1532 ssh_runner.go:195] Run: grep 172.19.2.238	control-plane.minikube.internal$ /etc/hosts
	I0229 02:42:10.451977    1532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.2.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:42:10.475979    1532 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500 for IP: 172.19.2.238
	I0229 02:42:10.475979    1532 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:42:10.476620    1532 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 02:42:10.476867    1532 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 02:42:10.477689    1532 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.key
	I0229 02:42:10.477853    1532 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.a332be12
	I0229 02:42:10.477937    1532 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.a332be12 with IP's: [172.19.2.238 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 02:42:10.818670    1532 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.a332be12 ...
	I0229 02:42:10.818670    1532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.a332be12: {Name:mk3ff66c4da8459c2353911ccafdd38e8120ad31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:42:10.820838    1532 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.a332be12 ...
	I0229 02:42:10.820838    1532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.a332be12: {Name:mk8c3a0e50e51af8a0d05e6aeeb6785226bd1a78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:42:10.821202    1532 certs.go:337] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.a332be12 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt
	I0229 02:42:10.834129    1532 certs.go:341] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.a332be12 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key
	I0229 02:42:10.836178    1532 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key
	I0229 02:42:10.836178    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 02:42:10.836536    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 02:42:10.836863    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 02:42:10.836863    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 02:42:10.836863    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 02:42:10.836863    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 02:42:10.837501    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 02:42:10.837598    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 02:42:10.838108    1532 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem (1338 bytes)
	W0229 02:42:10.838250    1532 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312_empty.pem, impossibly tiny 0 bytes
	I0229 02:42:10.838558    1532 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 02:42:10.838716    1532 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 02:42:10.838716    1532 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 02:42:10.838716    1532 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0229 02:42:10.839455    1532 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem (1708 bytes)
	I0229 02:42:10.839598    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem -> /usr/share/ca-certificates/3312.pem
	I0229 02:42:10.839670    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /usr/share/ca-certificates/33122.pem
	I0229 02:42:10.839670    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:42:10.840943    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:42:10.890064    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:42:10.939684    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:42:10.984389    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 02:42:11.029094    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:42:11.075299    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:42:11.125697    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:42:11.171969    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:42:11.222096    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem --> /usr/share/ca-certificates/3312.pem (1338 bytes)
	I0229 02:42:11.266029    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /usr/share/ca-certificates/33122.pem (1708 bytes)
	I0229 02:42:11.317063    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:42:11.361913    1532 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:42:11.405569    1532 ssh_runner.go:195] Run: openssl version
	I0229 02:42:11.413951    1532 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 02:42:11.424695    1532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/33122.pem && ln -fs /usr/share/ca-certificates/33122.pem /etc/ssl/certs/33122.pem"
	I0229 02:42:11.453890    1532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/33122.pem
	I0229 02:42:11.462287    1532 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:42:11.462446    1532 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:42:11.471326    1532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/33122.pem
	I0229 02:42:11.481128    1532 command_runner.go:130] > 3ec20f2e
	I0229 02:42:11.490731    1532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/33122.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:42:11.519460    1532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:42:11.551975    1532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:42:11.559902    1532 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:42:11.560002    1532 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:42:11.568323    1532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:42:11.575353    1532 command_runner.go:130] > b5213941
	I0229 02:42:11.586279    1532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:42:11.616513    1532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3312.pem && ln -fs /usr/share/ca-certificates/3312.pem /etc/ssl/certs/3312.pem"
	I0229 02:42:11.648476    1532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3312.pem
	I0229 02:42:11.655490    1532 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:42:11.656578    1532 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:42:11.665391    1532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3312.pem
	I0229 02:42:11.674597    1532 command_runner.go:130] > 51391683
	I0229 02:42:11.683686    1532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3312.pem /etc/ssl/certs/51391683.0"
	I0229 02:42:11.713263    1532 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:42:11.721858    1532 command_runner.go:130] > ca.crt
	I0229 02:42:11.721858    1532 command_runner.go:130] > ca.key
	I0229 02:42:11.721858    1532 command_runner.go:130] > healthcheck-client.crt
	I0229 02:42:11.721858    1532 command_runner.go:130] > healthcheck-client.key
	I0229 02:42:11.721858    1532 command_runner.go:130] > peer.crt
	I0229 02:42:11.721858    1532 command_runner.go:130] > peer.key
	I0229 02:42:11.721858    1532 command_runner.go:130] > server.crt
	I0229 02:42:11.721858    1532 command_runner.go:130] > server.key
	I0229 02:42:11.731210    1532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:42:11.741808    1532 command_runner.go:130] > Certificate will not expire
	I0229 02:42:11.752797    1532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:42:11.763403    1532 command_runner.go:130] > Certificate will not expire
	I0229 02:42:11.772221    1532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:42:11.783216    1532 command_runner.go:130] > Certificate will not expire
	I0229 02:42:11.793229    1532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:42:11.803076    1532 command_runner.go:130] > Certificate will not expire
	I0229 02:42:11.813608    1532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:42:11.824698    1532 command_runner.go:130] > Certificate will not expire
	I0229 02:42:11.835071    1532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 02:42:11.846370    1532 command_runner.go:130] > Certificate will not expire
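Each `openssl x509 -checkend 86400` run above exits 0, and prints "Certificate will not expire", when the certificate is still valid 24 hours from now; that is how minikube decides whether to regenerate certs on restart. A minimal Python sketch of the same window check (the function name and sample dates are illustrative, not from minikube):

```python
from datetime import datetime, timedelta, timezone

def expires_within(not_after: datetime, seconds: int = 86400) -> bool:
    """Mirror `openssl x509 -checkend N`: True when the cert's notAfter
    falls inside the next N seconds, i.e. the check would fail."""
    return not_after <= datetime.now(timezone.utc) + timedelta(seconds=seconds)

# A cert valid for another year passes the 24-hour check.
year_out = datetime.now(timezone.utc) + timedelta(days=365)
print(expires_within(year_out))  # False -> "Certificate will not expire"
```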
	I0229 02:42:11.846370    1532 kubeadm.go:404] StartCluster: {Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.238 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.4.42 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:42:11.854019    1532 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 02:42:11.893381    1532 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:42:11.913052    1532 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0229 02:42:11.913052    1532 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0229 02:42:11.913052    1532 command_runner.go:130] > /var/lib/minikube/etcd:
	I0229 02:42:11.913052    1532 command_runner.go:130] > member
	I0229 02:42:11.913052    1532 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:42:11.913052    1532 kubeadm.go:636] restartCluster start
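The `sudo ls` just above is the fresh-init-vs-restart decision point: only when all three kubelet/etcd artifacts exist does minikube log "found existing configuration files" and take the restart path. A hedged sketch of that presence check, run against a throwaway directory tree rather than a real node filesystem:

```python
from pathlib import Path
import tempfile

# The three artifacts the log's `sudo ls` checks before choosing restart.
REQUIRED = (
    "var/lib/kubelet/config.yaml",
    "var/lib/kubelet/kubeadm-flags.env",
    "var/lib/minikube/etcd",
)

def found_existing_config(root: Path) -> bool:
    """True when every required config artifact exists under root."""
    return all((root / rel).exists() for rel in REQUIRED)

# Demo: build the expected tree in a temp dir, then check it.
root = Path(tempfile.mkdtemp())
for rel in REQUIRED:
    p = root / rel
    p.parent.mkdir(parents=True, exist_ok=True)
    p.touch()
print(found_existing_config(root))  # True -> "will attempt cluster restart"
```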
	I0229 02:42:11.925527    1532 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:42:11.943318    1532 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:42:11.944330    1532 kubeconfig.go:135] verify returned: extract IP: "multinode-314500" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:42:11.944330    1532 kubeconfig.go:146] "multinode-314500" context is missing from C:\Users\jenkins.minikube5\minikube-integration\kubeconfig - will repair!
	I0229 02:42:11.945327    1532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:42:11.957313    1532 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:42:11.958320    1532 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.238:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500/client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500/client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:42:11.959322    1532 cert_rotation.go:137] Starting client certificate rotation controller
	I0229 02:42:11.968317    1532 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:42:11.986929    1532 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0229 02:42:11.986929    1532 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:42:11.986929    1532 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0229 02:42:11.986929    1532 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0229 02:42:11.986929    1532 command_runner.go:130] >  kind: InitConfiguration
	I0229 02:42:11.986929    1532 command_runner.go:130] >  localAPIEndpoint:
	I0229 02:42:11.986929    1532 command_runner.go:130] > -  advertiseAddress: 172.19.2.252
	I0229 02:42:11.986929    1532 command_runner.go:130] > +  advertiseAddress: 172.19.2.238
	I0229 02:42:11.986929    1532 command_runner.go:130] >    bindPort: 8443
	I0229 02:42:11.986929    1532 command_runner.go:130] >  bootstrapTokens:
	I0229 02:42:11.986929    1532 command_runner.go:130] >    - groups:
	I0229 02:42:11.986929    1532 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0229 02:42:11.986929    1532 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0229 02:42:11.986929    1532 command_runner.go:130] >    name: "multinode-314500"
	I0229 02:42:11.986929    1532 command_runner.go:130] >    kubeletExtraArgs:
	I0229 02:42:11.986929    1532 command_runner.go:130] > -    node-ip: 172.19.2.252
	I0229 02:42:11.986929    1532 command_runner.go:130] > +    node-ip: 172.19.2.238
	I0229 02:42:11.986929    1532 command_runner.go:130] >    taints: []
	I0229 02:42:11.986929    1532 command_runner.go:130] >  ---
	I0229 02:42:11.986929    1532 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0229 02:42:11.986929    1532 command_runner.go:130] >  kind: ClusterConfiguration
	I0229 02:42:11.986929    1532 command_runner.go:130] >  apiServer:
	I0229 02:42:11.986929    1532 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.19.2.252"]
	I0229 02:42:11.986929    1532 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.19.2.238"]
	I0229 02:42:11.986929    1532 command_runner.go:130] >    extraArgs:
	I0229 02:42:11.986929    1532 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0229 02:42:11.987453    1532 command_runner.go:130] >  controllerManager:
	I0229 02:42:11.987637    1532 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.19.2.252
	+  advertiseAddress: 172.19.2.238
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-314500"
	   kubeletExtraArgs:
	-    node-ip: 172.19.2.252
	+    node-ip: 172.19.2.238
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.19.2.252"]
	+  certSANs: ["127.0.0.1", "localhost", "172.19.2.238"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
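The "needs reconfigure: configs differ" decision above is diff-driven: a non-empty unified diff between the deployed `kubeadm.yaml` and the freshly rendered `kubeadm.yaml.new` (here, the node IP moved from 172.19.2.252 to 172.19.2.238) forces a rewrite. A sketch of the same comparison with Python's `difflib`, using two trimmed stand-ins for the real files:

```python
import difflib

# Trimmed stand-ins for /var/tmp/minikube/kubeadm.yaml and kubeadm.yaml.new.
deployed = ["localAPIEndpoint:", "  advertiseAddress: 172.19.2.252"]
rendered = ["localAPIEndpoint:", "  advertiseAddress: 172.19.2.238"]

diff = list(difflib.unified_diff(deployed, rendered,
                                 fromfile="kubeadm.yaml",
                                 tofile="kubeadm.yaml.new",
                                 lineterm=""))
needs_reconfigure = bool(diff)  # any difference at all triggers reconfigure
print("\n".join(diff))
print(needs_reconfigure)  # True -> "needs reconfigure: configs differ"
```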
	I0229 02:42:11.987702    1532 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:42:11.993619    1532 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 02:42:12.025972    1532 command_runner.go:130] > 72b2d832587c
	I0229 02:42:12.026007    1532 command_runner.go:130] > 5814ae38cea0
	I0229 02:42:12.026007    1532 command_runner.go:130] > e767eb473501
	I0229 02:42:12.026007    1532 command_runner.go:130] > 1993ffe76ae7
	I0229 02:42:12.026007    1532 command_runner.go:130] > b606f60fc884
	I0229 02:42:12.026007    1532 command_runner.go:130] > 341278d602dd
	I0229 02:42:12.026007    1532 command_runner.go:130] > 349bdaee8eb9
	I0229 02:42:12.026007    1532 command_runner.go:130] > b37b7f8a0d78
	I0229 02:42:12.026007    1532 command_runner.go:130] > 02fbddb29c60
	I0229 02:42:12.026007    1532 command_runner.go:130] > ada445c976af
	I0229 02:42:12.026007    1532 command_runner.go:130] > 795e8c684507
	I0229 02:42:12.026007    1532 command_runner.go:130] > f1cb36bcb3f3
	I0229 02:42:12.026007    1532 command_runner.go:130] > 41745010357f
	I0229 02:42:12.026007    1532 command_runner.go:130] > 9d23233978a7
	I0229 02:42:12.026007    1532 command_runner.go:130] > 252fb20145ea
	I0229 02:42:12.026007    1532 command_runner.go:130] > 340bdcfacbe2
	I0229 02:42:12.026007    1532 command_runner.go:130] > 007d6c9a53e1
	I0229 02:42:12.026007    1532 command_runner.go:130] > 11c14ebdfaf6
	I0229 02:42:12.026007    1532 command_runner.go:130] > 8c944d91b625
	I0229 02:42:12.026007    1532 command_runner.go:130] > dd61788b0a0d
	I0229 02:42:12.026007    1532 command_runner.go:130] > c93e33130746
	I0229 02:42:12.026007    1532 command_runner.go:130] > 4b10f8bd940b
	I0229 02:42:12.026007    1532 command_runner.go:130] > edb41bd5e75d
	I0229 02:42:12.026007    1532 command_runner.go:130] > ab0c4864aee5
	I0229 02:42:12.026007    1532 command_runner.go:130] > 26b1ab05f99a
	I0229 02:42:12.026007    1532 command_runner.go:130] > bf7b9750ae9e
	I0229 02:42:12.026007    1532 command_runner.go:130] > 96810146c69c
	I0229 02:42:12.026625    1532 docker.go:483] Stopping containers: [72b2d832587c 5814ae38cea0 e767eb473501 1993ffe76ae7 b606f60fc884 341278d602dd 349bdaee8eb9 b37b7f8a0d78 02fbddb29c60 ada445c976af 795e8c684507 f1cb36bcb3f3 41745010357f 9d23233978a7 252fb20145ea 340bdcfacbe2 007d6c9a53e1 11c14ebdfaf6 8c944d91b625 dd61788b0a0d c93e33130746 4b10f8bd940b edb41bd5e75d ab0c4864aee5 26b1ab05f99a bf7b9750ae9e 96810146c69c]
	I0229 02:42:12.033808    1532 ssh_runner.go:195] Run: docker stop 72b2d832587c 5814ae38cea0 e767eb473501 1993ffe76ae7 b606f60fc884 341278d602dd 349bdaee8eb9 b37b7f8a0d78 02fbddb29c60 ada445c976af 795e8c684507 f1cb36bcb3f3 41745010357f 9d23233978a7 252fb20145ea 340bdcfacbe2 007d6c9a53e1 11c14ebdfaf6 8c944d91b625 dd61788b0a0d c93e33130746 4b10f8bd940b edb41bd5e75d ab0c4864aee5 26b1ab05f99a bf7b9750ae9e 96810146c69c
	I0229 02:42:12.065768    1532 command_runner.go:130] > 72b2d832587c
	I0229 02:42:12.065768    1532 command_runner.go:130] > 5814ae38cea0
	I0229 02:42:12.065835    1532 command_runner.go:130] > e767eb473501
	I0229 02:42:12.065835    1532 command_runner.go:130] > 1993ffe76ae7
	I0229 02:42:12.065866    1532 command_runner.go:130] > b606f60fc884
	I0229 02:42:12.065927    1532 command_runner.go:130] > 341278d602dd
	I0229 02:42:12.065927    1532 command_runner.go:130] > 349bdaee8eb9
	I0229 02:42:12.065998    1532 command_runner.go:130] > b37b7f8a0d78
	I0229 02:42:12.065998    1532 command_runner.go:130] > 02fbddb29c60
	I0229 02:42:12.065998    1532 command_runner.go:130] > ada445c976af
	I0229 02:42:12.065998    1532 command_runner.go:130] > 795e8c684507
	I0229 02:42:12.065998    1532 command_runner.go:130] > f1cb36bcb3f3
	I0229 02:42:12.065998    1532 command_runner.go:130] > 41745010357f
	I0229 02:42:12.065998    1532 command_runner.go:130] > 9d23233978a7
	I0229 02:42:12.065998    1532 command_runner.go:130] > 252fb20145ea
	I0229 02:42:12.065998    1532 command_runner.go:130] > 340bdcfacbe2
	I0229 02:42:12.065998    1532 command_runner.go:130] > 007d6c9a53e1
	I0229 02:42:12.065998    1532 command_runner.go:130] > 11c14ebdfaf6
	I0229 02:42:12.065998    1532 command_runner.go:130] > 8c944d91b625
	I0229 02:42:12.065998    1532 command_runner.go:130] > dd61788b0a0d
	I0229 02:42:12.065998    1532 command_runner.go:130] > c93e33130746
	I0229 02:42:12.065998    1532 command_runner.go:130] > 4b10f8bd940b
	I0229 02:42:12.065998    1532 command_runner.go:130] > edb41bd5e75d
	I0229 02:42:12.065998    1532 command_runner.go:130] > ab0c4864aee5
	I0229 02:42:12.065998    1532 command_runner.go:130] > 26b1ab05f99a
	I0229 02:42:12.065998    1532 command_runner.go:130] > bf7b9750ae9e
	I0229 02:42:12.065998    1532 command_runner.go:130] > 96810146c69c
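The `--filter=name=k8s_.*_(kube-system)_` argument passed to `docker ps` above is a name regex matching the `k8s_<container>_<pod>_<namespace>_<uid>_<attempt>` scheme cri-dockerd uses, so only kube-system containers are listed and stopped. The same match in Python (the container names below are made up for illustration):

```python
import re

# Same regex the log passes to `docker ps --filter=name=...`.
KUBE_SYSTEM = re.compile(r"k8s_.*_(kube-system)_")

names = [
    "k8s_etcd_etcd-multinode-314500_kube-system_3f9a_0",
    "k8s_POD_kube-apiserver-multinode-314500_kube-system_77b1_0",
    "k8s_app_myapp-abc12_default_9c2e_0",  # non-kube-system: excluded
]
kube_system_only = [n for n in names if KUBE_SYSTEM.search(n)]
print(kube_system_only)  # the first two names only
```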
	I0229 02:42:12.074515    1532 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:42:12.112852    1532 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:42:12.130884    1532 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0229 02:42:12.131596    1532 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0229 02:42:12.131596    1532 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0229 02:42:12.131681    1532 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:42:12.131992    1532 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:42:12.140696    1532 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:42:12.158321    1532 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:42:12.158321    1532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using the existing "sa" key
	I0229 02:42:12.384795    1532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:42:13.350432    1532 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:42:13.350432    1532 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:42:13.350432    1532 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:42:13.350432    1532 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:42:13.350897    1532 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:42:13.350897    1532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:42:13.661809    1532 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:42:13.661949    1532 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:42:13.661949    1532 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 02:42:13.662077    1532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:42:13.753413    1532 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:42:13.753727    1532 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:42:13.753786    1532 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:42:13.753786    1532 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:42:13.754133    1532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:42:13.846582    1532 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:42:13.846738    1532 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:42:13.856567    1532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:42:14.358672    1532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:42:14.865645    1532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:42:15.360809    1532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:42:15.386716    1532 command_runner.go:130] > 1764
	I0229 02:42:15.386772    1532 api_server.go:72] duration metric: took 1.5399844s to wait for apiserver process to appear ...
	I0229 02:42:15.386772    1532 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:42:15.386772    1532 api_server.go:253] Checking apiserver healthz at https://172.19.2.238:8443/healthz ...
	I0229 02:42:18.776359    1532 api_server.go:279] https://172.19.2.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:42:18.776818    1532 api_server.go:103] status: https://172.19.2.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:42:18.776871    1532 api_server.go:253] Checking apiserver healthz at https://172.19.2.238:8443/healthz ...
	I0229 02:42:18.889474    1532 api_server.go:279] https://172.19.2.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:42:18.889720    1532 api_server.go:103] status: https://172.19.2.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:42:18.889720    1532 api_server.go:253] Checking apiserver healthz at https://172.19.2.238:8443/healthz ...
	I0229 02:42:18.974559    1532 api_server.go:279] https://172.19.2.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:42:18.974661    1532 api_server.go:103] status: https://172.19.2.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
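The 403-then-500-then-healthy progression here is normal during apiserver startup: anonymous `/healthz` probes are rejected until RBAC bootstrap completes, and the verbose body flips entries from `[-]` to `[+]` as each poststarthook finishes. A small parser for that verbose body, fed a few sample lines taken from the log above:

```python
def failed_checks(body: str) -> list:
    """Names of the checks a verbose /healthz body reports as failed ([-] lines)."""
    return [line[3:].split(" failed")[0]
            for line in body.splitlines()
            if line.startswith("[-]")]

sample = """[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/bootstrap-controller failed: reason withheld
healthz check failed"""

print(failed_checks(sample))
# ['poststarthook/rbac/bootstrap-roles', 'poststarthook/bootstrap-controller']
```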
	I0229 02:42:19.394394    1532 api_server.go:253] Checking apiserver healthz at https://172.19.2.238:8443/healthz ...
	I0229 02:42:19.409232    1532 api_server.go:279] https://172.19.2.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:42:19.409404    1532 api_server.go:103] status: https://172.19.2.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:42:19.901628    1532 api_server.go:253] Checking apiserver healthz at https://172.19.2.238:8443/healthz ...
	I0229 02:42:19.920180    1532 api_server.go:279] https://172.19.2.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:42:19.920280    1532 api_server.go:103] status: https://172.19.2.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:42:20.391667    1532 api_server.go:253] Checking apiserver healthz at https://172.19.2.238:8443/healthz ...
	I0229 02:42:20.404616    1532 api_server.go:279] https://172.19.2.238:8443/healthz returned 200:
	ok
	I0229 02:42:20.404810    1532 round_trippers.go:463] GET https://172.19.2.238:8443/version
	I0229 02:42:20.404810    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:20.404810    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:20.404810    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:20.418470    1532 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0229 02:42:20.418470    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:20.419082    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:20.419082    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:20.419082    1532 round_trippers.go:580]     Content-Length: 264
	I0229 02:42:20.419082    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:20 GMT
	I0229 02:42:20.419082    1532 round_trippers.go:580]     Audit-Id: 6ed74524-14fd-4ef9-b17c-8ab10ae57111
	I0229 02:42:20.419082    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:20.419082    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:20.419082    1532 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0229 02:42:20.419082    1532 api_server.go:141] control plane version: v1.28.4
	I0229 02:42:20.419082    1532 api_server.go:131] duration metric: took 5.0320295s to wait for apiserver health ...
	I0229 02:42:20.419082    1532 cni.go:84] Creating CNI manager for ""
	I0229 02:42:20.419082    1532 cni.go:136] 2 nodes found, recommending kindnet
	I0229 02:42:20.420136    1532 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0229 02:42:20.431795    1532 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 02:42:20.440834    1532 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 02:42:20.440834    1532 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 02:42:20.440834    1532 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 02:42:20.440834    1532 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 02:42:20.440834    1532 command_runner.go:130] > Access: 2024-02-29 02:40:58.275316000 +0000
	I0229 02:42:20.440834    1532 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 02:42:20.440834    1532 command_runner.go:130] > Change: 2024-02-29 02:40:46.412000000 +0000
	I0229 02:42:20.440834    1532 command_runner.go:130] >  Birth: -
	I0229 02:42:20.440834    1532 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 02:42:20.440834    1532 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 02:42:20.485530    1532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 02:42:21.725086    1532 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0229 02:42:21.725086    1532 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0229 02:42:21.725086    1532 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0229 02:42:21.725086    1532 command_runner.go:130] > daemonset.apps/kindnet configured
	I0229 02:42:21.725086    1532 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.2394879s)
	I0229 02:42:21.725086    1532 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:42:21.725086    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods
	I0229 02:42:21.725086    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:21.725086    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:21.725086    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:21.730077    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:21.731126    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:21.731126    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:21 GMT
	I0229 02:42:21.731126    1532 round_trippers.go:580]     Audit-Id: 60d19813-251a-4057-8cc9-ce80e3ba7d53
	I0229 02:42:21.731217    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:21.731217    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:21.731258    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:21.731258    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:21.732700    1532 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1924"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 70099 chars]
	I0229 02:42:21.741069    1532 system_pods.go:59] 10 kube-system pods found
	I0229 02:42:21.741069    1532 system_pods.go:61] "coredns-5dd5756b68-8g6tg" [ef7fb259-9f24-4645-9eff-2b16f6789e1b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:42:21.741069    1532 system_pods.go:61] "etcd-multinode-314500" [64dda041-1f1d-4866-aa39-62d21bd84e46] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:42:21.741069    1532 system_pods.go:61] "kindnet-6r7b8" [402c3ac1-05a9-45f1-aa7d-c0fb8ced6c87] Running
	I0229 02:42:21.741069    1532 system_pods.go:61] "kindnet-t9r77" [4620d417-744c-4049-82ab-79d1ee7f047c] Running
	I0229 02:42:21.741069    1532 system_pods.go:61] "kube-apiserver-multinode-314500" [baa3dc33-6d86-4748-9d57-c64f45dcfbf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:42:21.741069    1532 system_pods.go:61] "kube-controller-manager-multinode-314500" [58e57902-e113-44a9-b5b5-4aba2ba13491] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:42:21.741069    1532 system_pods.go:61] "kube-proxy-4gbrl" [accb56cb-79ee-4f16-b05e-91bf554c4a60] Running
	I0229 02:42:21.741069    1532 system_pods.go:61] "kube-proxy-6r6j4" [2b84b22d-3786-4f9e-a23a-c7cfc93bb671] Running
	I0229 02:42:21.741069    1532 system_pods.go:61] "kube-scheduler-multinode-314500" [31fcecc6-17de-43a6-892d-37cd915de64b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:42:21.741069    1532 system_pods.go:61] "storage-provisioner" [9780520b-8ff9-408a-ab6f-41b63790ccd1] Running
	I0229 02:42:21.741069    1532 system_pods.go:74] duration metric: took 15.9813ms to wait for pod list to return data ...
	I0229 02:42:21.741069    1532 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:42:21.741069    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes
	I0229 02:42:21.741069    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:21.741069    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:21.741069    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:21.745063    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:21.745117    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:21.745117    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:21 GMT
	I0229 02:42:21.745117    1532 round_trippers.go:580]     Audit-Id: 3ee9f5f0-964d-4cb1-b6f4-93a2cfdfa963
	I0229 02:42:21.745117    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:21.745117    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:21.745117    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:21.745117    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:21.745117    1532 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1924"},"items":[{"metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10212 chars]
	I0229 02:42:21.746310    1532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:42:21.746310    1532 node_conditions.go:123] node cpu capacity is 2
	I0229 02:42:21.746310    1532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:42:21.746310    1532 node_conditions.go:123] node cpu capacity is 2
	I0229 02:42:21.746310    1532 node_conditions.go:105] duration metric: took 5.2415ms to run NodePressure ...
	I0229 02:42:21.746310    1532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:42:22.010570    1532 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0229 02:42:22.010667    1532 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0229 02:42:22.010845    1532 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:42:22.011117    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0229 02:42:22.011150    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.011150    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.011150    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.014930    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:22.014930    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.014930    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.014930    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.014930    1532 round_trippers.go:580]     Audit-Id: 6aa9513e-4fa3-49ec-ad4f-5135a86e3028
	I0229 02:42:22.014930    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.014930    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.014930    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.016321    1532 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1926"},"items":[{"metadata":{"name":"etcd-multinode-314500","namespace":"kube-system","uid":"64dda041-1f1d-4866-aa39-62d21bd84e46","resourceVersion":"1914","creationTimestamp":"2024-02-29T02:42:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.2.238:2379","kubernetes.io/config.hash":"96721f37a1f14642fee9a072efcaa322","kubernetes.io/config.mirror":"96721f37a1f14642fee9a072efcaa322","kubernetes.io/config.seen":"2024-02-29T02:42:14.259019103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 29323 chars]
	I0229 02:42:22.017782    1532 kubeadm.go:787] kubelet initialised
	I0229 02:42:22.017782    1532 kubeadm.go:788] duration metric: took 6.906ms waiting for restarted kubelet to initialise ...
	I0229 02:42:22.017857    1532 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:42:22.017933    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods
	I0229 02:42:22.017933    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.017933    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.017933    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.021133    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:22.022164    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.022164    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.022218    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.022218    1532 round_trippers.go:580]     Audit-Id: c8e7b74e-3155-4ddf-a884-675cbb06e3a4
	I0229 02:42:22.022218    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.022218    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.022242    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.023167    1532 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1926"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 70099 chars]
	I0229 02:42:22.026314    1532 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:22.026314    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:22.026314    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.026314    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.026314    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.038415    1532 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0229 02:42:22.038818    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.038852    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.038852    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.038852    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.038852    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.038852    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.038852    1532 round_trippers.go:580]     Audit-Id: 28657b83-0d1a-4fc5-bbd0-3baa515d71b4
	I0229 02:42:22.039093    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:22.039849    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:22.039916    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.039916    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.039980    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.043936    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:22.044012    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.044012    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.044012    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.044012    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.044064    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.044064    1532 round_trippers.go:580]     Audit-Id: 6c161273-6abc-42cd-bffa-b625994414cc
	I0229 02:42:22.044106    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.044106    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:22.044836    1532 pod_ready.go:97] node "multinode-314500" hosting pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.044878    1532 pod_ready.go:81] duration metric: took 18.5204ms waiting for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	E0229 02:42:22.044878    1532 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-314500" hosting pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.044878    1532 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:22.045009    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-314500
	I0229 02:42:22.045009    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.045009    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.045009    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.047188    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:22.047188    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.047188    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.047188    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.047188    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.047188    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.047188    1532 round_trippers.go:580]     Audit-Id: a254cea2-a8f3-4e08-bd95-f983a1439a59
	I0229 02:42:22.047188    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.048277    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-314500","namespace":"kube-system","uid":"64dda041-1f1d-4866-aa39-62d21bd84e46","resourceVersion":"1914","creationTimestamp":"2024-02-29T02:42:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.2.238:2379","kubernetes.io/config.hash":"96721f37a1f14642fee9a072efcaa322","kubernetes.io/config.mirror":"96721f37a1f14642fee9a072efcaa322","kubernetes.io/config.seen":"2024-02-29T02:42:14.259019103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6077 chars]
	I0229 02:42:22.048805    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:22.048805    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.048805    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.048805    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.051068    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:22.051068    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.051068    1532 round_trippers.go:580]     Audit-Id: e067c836-c1c9-4ccd-a13a-50013a0c48c5
	I0229 02:42:22.051068    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.051068    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.051068    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.051068    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.051068    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.051800    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:22.052206    1532 pod_ready.go:97] node "multinode-314500" hosting pod "etcd-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.052206    1532 pod_ready.go:81] duration metric: took 7.2935ms waiting for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	E0229 02:42:22.052206    1532 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-314500" hosting pod "etcd-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.052206    1532 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:22.052206    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-314500
	I0229 02:42:22.052206    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.052206    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.052206    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.054840    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:22.054840    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.054840    1532 round_trippers.go:580]     Audit-Id: 642c0bd5-feab-43d9-8f8c-367f0da4ceef
	I0229 02:42:22.054840    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.054840    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.054840    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.054840    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.054840    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.055409    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-314500","namespace":"kube-system","uid":"baa3dc33-6d86-4748-9d57-c64f45dcfbf7","resourceVersion":"1915","creationTimestamp":"2024-02-29T02:42:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.2.238:8443","kubernetes.io/config.hash":"a317731b2b94a8e14311676e58d24e16","kubernetes.io/config.mirror":"a317731b2b94a8e14311676e58d24e16","kubernetes.io/config.seen":"2024-02-29T02:42:14.259032504Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7635 chars]
	I0229 02:42:22.056052    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:22.056052    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.056052    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.056133    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.058339    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:22.059048    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.059048    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.059048    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.059048    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.059048    1532 round_trippers.go:580]     Audit-Id: d60dde48-e0d5-4615-878c-58128b9db24e
	I0229 02:42:22.059048    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.059048    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.059278    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:22.059306    1532 pod_ready.go:97] node "multinode-314500" hosting pod "kube-apiserver-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.059306    1532 pod_ready.go:81] duration metric: took 7.0997ms waiting for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	E0229 02:42:22.059306    1532 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-314500" hosting pod "kube-apiserver-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.059306    1532 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:22.059306    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-314500
	I0229 02:42:22.059306    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.059833    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.059833    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.065102    1532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:42:22.065102    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.065102    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.065102    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.065102    1532 round_trippers.go:580]     Audit-Id: a842636c-62ee-4fbf-bf84-919d77115bf5
	I0229 02:42:22.065102    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.065102    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.065102    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.065737    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-314500","namespace":"kube-system","uid":"58e57902-e113-44a9-b5b5-4aba2ba13491","resourceVersion":"1913","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.mirror":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.seen":"2024-02-29T02:15:52.221398986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7433 chars]
	I0229 02:42:22.135186    1532 request.go:629] Waited for 68.6354ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:22.135402    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:22.135402    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.135402    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.135402    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.143700    1532 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 02:42:22.144036    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.144036    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.144036    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.144036    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.144101    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.144101    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.144101    1532 round_trippers.go:580]     Audit-Id: 8a1c7a53-a43b-48c8-8780-97b1b1df9400
	I0229 02:42:22.144433    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:22.145578    1532 pod_ready.go:97] node "multinode-314500" hosting pod "kube-controller-manager-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.146087    1532 pod_ready.go:81] duration metric: took 86.7764ms waiting for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	E0229 02:42:22.146151    1532 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-314500" hosting pod "kube-controller-manager-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.146151    1532 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:22.339457    1532 request.go:629] Waited for 193.0973ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gbrl
	I0229 02:42:22.339653    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gbrl
	I0229 02:42:22.339653    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.339653    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.339653    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.345531    1532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:42:22.345531    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.345531    1532 round_trippers.go:580]     Audit-Id: c7e4abd9-aa06-4b4b-998c-0c0417e17697
	I0229 02:42:22.345531    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.345879    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.345879    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.345879    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.345926    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.346271    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4gbrl","generateName":"kube-proxy-","namespace":"kube-system","uid":"accb56cb-79ee-4f16-b05e-91bf554c4a60","resourceVersion":"1598","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0229 02:42:22.525221    1532 request.go:629] Waited for 178.2303ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:42:22.525732    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:42:22.525732    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.525732    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.525833    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.529173    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:22.529337    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.529337    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.529337    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.529337    1532 round_trippers.go:580]     Audit-Id: 2c65ff8c-9a66-4b3b-97a2-5ce0fc2d12b9
	I0229 02:42:22.529337    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.529337    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.529337    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.529337    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"2332789d-7280-427a-9644-fc1ffcfc737d","resourceVersion":"1763","creationTimestamp":"2024-02-29T02:35:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:35:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3803 chars]
	I0229 02:42:22.530067    1532 pod_ready.go:92] pod "kube-proxy-4gbrl" in "kube-system" namespace has status "Ready":"True"
	I0229 02:42:22.530103    1532 pod_ready.go:81] duration metric: took 383.8954ms waiting for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:22.530103    1532 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:22.730232    1532 request.go:629] Waited for 199.9147ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:42:22.730232    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:42:22.730232    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.730232    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.730232    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.734013    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:22.734098    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.734098    1532 round_trippers.go:580]     Audit-Id: dc06718a-460e-4bb1-9493-c7273af18ac9
	I0229 02:42:22.734098    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.734098    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.734098    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.734098    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.734098    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.734350    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6r6j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b84b22d-3786-4f9e-a23a-c7cfc93bb671","resourceVersion":"1923","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5735 chars]
	I0229 02:42:22.932928    1532 request.go:629] Waited for 197.892ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:22.933019    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:22.933019    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.933019    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.933019    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.936446    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:22.936446    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.936446    1532 round_trippers.go:580]     Audit-Id: 27472a98-f820-4e7e-9e60-bfb3194a0861
	I0229 02:42:22.937170    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.937170    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.937170    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.937170    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.937170    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:23 GMT
	I0229 02:42:22.937395    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:22.937782    1532 pod_ready.go:97] node "multinode-314500" hosting pod "kube-proxy-6r6j4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.937782    1532 pod_ready.go:81] duration metric: took 407.6565ms waiting for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	E0229 02:42:22.937782    1532 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-314500" hosting pod "kube-proxy-6r6j4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.937782    1532 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:23.137129    1532 request.go:629] Waited for 199.3361ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:42:23.137497    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:42:23.137497    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:23.137497    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:23.137497    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:23.142072    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:23.142072    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:23.142072    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:23.142072    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:23.142072    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:23 GMT
	I0229 02:42:23.142072    1532 round_trippers.go:580]     Audit-Id: 80f19c24-7ea2-4cbc-8d4e-6b035c15a341
	I0229 02:42:23.142072    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:23.143095    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:23.143391    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-314500","namespace":"kube-system","uid":"31fcecc6-17de-43a6-892d-37cd915de64b","resourceVersion":"1912","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.mirror":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.seen":"2024-02-29T02:15:52.221399886Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5145 chars]
	I0229 02:42:23.325677    1532 request.go:629] Waited for 181.4692ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:23.325842    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:23.325915    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:23.325915    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:23.325915    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:23.329726    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:23.330286    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:23.330286    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:23.330286    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:23.330286    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:23.330286    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:23 GMT
	I0229 02:42:23.330286    1532 round_trippers.go:580]     Audit-Id: 52c816cb-1c4b-4d28-affb-b8710b831e6e
	I0229 02:42:23.330286    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:23.330567    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:23.331090    1532 pod_ready.go:97] node "multinode-314500" hosting pod "kube-scheduler-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:23.331090    1532 pod_ready.go:81] duration metric: took 393.2863ms waiting for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	E0229 02:42:23.331090    1532 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-314500" hosting pod "kube-scheduler-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:23.331090    1532 pod_ready.go:38] duration metric: took 1.3131601s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:42:23.331196    1532 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:42:23.349657    1532 command_runner.go:130] > -16
	I0229 02:42:23.350152    1532 ops.go:34] apiserver oom_adj: -16
	I0229 02:42:23.350152    1532 kubeadm.go:640] restartCluster took 11.4364638s
	I0229 02:42:23.350152    1532 kubeadm.go:406] StartCluster complete in 11.5031419s
	I0229 02:42:23.350152    1532 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:42:23.350471    1532 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:42:23.351676    1532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:42:23.353077    1532 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:42:23.353077    1532 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:42:23.354127    1532 out.go:177] * Enabled addons: 
	I0229 02:42:23.353471    1532 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:42:23.354676    1532 addons.go:505] enable addons completed in 1.823ms: enabled=[]
	I0229 02:42:23.364536    1532 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:42:23.365537    1532 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.238:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:42:23.366081    1532 cert_rotation.go:137] Starting client certificate rotation controller
	I0229 02:42:23.366081    1532 round_trippers.go:463] GET https://172.19.2.238:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 02:42:23.366081    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:23.366081    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:23.366081    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:23.380766    1532 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0229 02:42:23.380766    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:23.380766    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:23.380766    1532 round_trippers.go:580]     Content-Length: 292
	I0229 02:42:23.380766    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:23 GMT
	I0229 02:42:23.380766    1532 round_trippers.go:580]     Audit-Id: c8eb4613-a2b5-4a69-afd6-78803dddbef0
	I0229 02:42:23.380766    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:23.380766    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:23.380766    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:23.380766    1532 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"1925","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 02:42:23.380766    1532 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-314500" context rescaled to 1 replicas
	I0229 02:42:23.380766    1532 start.go:223] Will wait 6m0s for node &{Name: IP:172.19.2.238 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 02:42:23.381767    1532 out.go:177] * Verifying Kubernetes components...
	I0229 02:42:23.393463    1532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:42:23.489211    1532 command_runner.go:130] > apiVersion: v1
	I0229 02:42:23.489270    1532 command_runner.go:130] > data:
	I0229 02:42:23.489331    1532 command_runner.go:130] >   Corefile: |
	I0229 02:42:23.489331    1532 command_runner.go:130] >     .:53 {
	I0229 02:42:23.489331    1532 command_runner.go:130] >         log
	I0229 02:42:23.489331    1532 command_runner.go:130] >         errors
	I0229 02:42:23.489389    1532 command_runner.go:130] >         health {
	I0229 02:42:23.489389    1532 command_runner.go:130] >            lameduck 5s
	I0229 02:42:23.489389    1532 command_runner.go:130] >         }
	I0229 02:42:23.489389    1532 command_runner.go:130] >         ready
	I0229 02:42:23.489389    1532 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0229 02:42:23.489467    1532 command_runner.go:130] >            pods insecure
	I0229 02:42:23.489467    1532 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0229 02:42:23.489467    1532 command_runner.go:130] >            ttl 30
	I0229 02:42:23.489467    1532 command_runner.go:130] >         }
	I0229 02:42:23.489537    1532 command_runner.go:130] >         prometheus :9153
	I0229 02:42:23.489537    1532 command_runner.go:130] >         hosts {
	I0229 02:42:23.489537    1532 command_runner.go:130] >            172.19.0.1 host.minikube.internal
	I0229 02:42:23.489537    1532 command_runner.go:130] >            fallthrough
	I0229 02:42:23.489599    1532 command_runner.go:130] >         }
	I0229 02:42:23.489599    1532 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0229 02:42:23.489658    1532 command_runner.go:130] >            max_concurrent 1000
	I0229 02:42:23.489702    1532 command_runner.go:130] >         }
	I0229 02:42:23.489702    1532 command_runner.go:130] >         cache 30
	I0229 02:42:23.489743    1532 command_runner.go:130] >         loop
	I0229 02:42:23.489743    1532 command_runner.go:130] >         reload
	I0229 02:42:23.489789    1532 command_runner.go:130] >         loadbalance
	I0229 02:42:23.489829    1532 command_runner.go:130] >     }
	I0229 02:42:23.489829    1532 command_runner.go:130] > kind: ConfigMap
	I0229 02:42:23.489829    1532 command_runner.go:130] > metadata:
	I0229 02:42:23.489884    1532 command_runner.go:130] >   creationTimestamp: "2024-02-29T02:15:51Z"
	I0229 02:42:23.489884    1532 command_runner.go:130] >   name: coredns
	I0229 02:42:23.489929    1532 command_runner.go:130] >   namespace: kube-system
	I0229 02:42:23.489929    1532 command_runner.go:130] >   resourceVersion: "388"
	I0229 02:42:23.489979    1532 command_runner.go:130] >   uid: 3fc93d17-14a4-4d49-9f77-f2cd8adceaed
	I0229 02:42:23.490114    1532 node_ready.go:35] waiting up to 6m0s for node "multinode-314500" to be "Ready" ...
	I0229 02:42:23.490114    1532 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 02:42:23.529924    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:23.529995    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:23.529995    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:23.530050    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:23.533609    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:23.534454    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:23.534536    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:23.534536    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:23.534536    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:23 GMT
	I0229 02:42:23.534536    1532 round_trippers.go:580]     Audit-Id: 862974af-13fa-4a05-b555-4e71dc715f88
	I0229 02:42:23.534536    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:23.534536    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:23.534772    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:23.998269    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:23.998346    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:23.998346    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:23.998346    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:24.002747    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:24.002747    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:24.002747    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:24.002747    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:24.002747    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:24.002841    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:24 GMT
	I0229 02:42:24.002841    1532 round_trippers.go:580]     Audit-Id: 59e18ea3-9875-4258-9a10-bef54a6f56dd
	I0229 02:42:24.002841    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:24.002841    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:24.504043    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:24.504043    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:24.504043    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:24.504043    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:24.508326    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:24.508326    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:24.508326    1532 round_trippers.go:580]     Audit-Id: 7eac6d18-fde4-4810-9104-04c075b98e0e
	I0229 02:42:24.508326    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:24.508326    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:24.508326    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:24.508326    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:24.508326    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:24 GMT
	I0229 02:42:24.509269    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:25.004336    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:25.004466    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:25.004466    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:25.004466    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:25.008653    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:25.008653    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:25.008653    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:25 GMT
	I0229 02:42:25.008653    1532 round_trippers.go:580]     Audit-Id: e316f9b3-0410-4ab1-987e-85c8a68567d1
	I0229 02:42:25.008653    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:25.008653    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:25.008653    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:25.008653    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:25.009389    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:25.503455    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:25.503455    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:25.503455    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:25.503455    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:25.508188    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:25.508734    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:25.508734    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:25.508734    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:25 GMT
	I0229 02:42:25.508734    1532 round_trippers.go:580]     Audit-Id: 70605347-1d6e-4994-b67b-168436956b75
	I0229 02:42:25.508734    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:25.508734    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:25.508820    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:25.508980    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:25.509085    1532 node_ready.go:58] node "multinode-314500" has status "Ready":"False"
	I0229 02:42:26.002981    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:26.002981    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:26.002981    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:26.002981    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:26.007390    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:26.007390    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:26.007390    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:26.007390    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:26.007390    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:26 GMT
	I0229 02:42:26.007390    1532 round_trippers.go:580]     Audit-Id: 1ded61c7-8344-414c-93c5-c7aa4655c793
	I0229 02:42:26.007390    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:26.007390    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:26.008752    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:26.502677    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:26.502677    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:26.502677    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:26.502677    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:26.507399    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:26.507399    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:26.507399    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:26.507399    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:26.507399    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:26.507399    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:26 GMT
	I0229 02:42:26.507399    1532 round_trippers.go:580]     Audit-Id: 275954e1-e267-47b1-8de1-06409f9dc777
	I0229 02:42:26.507399    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:26.508273    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:27.000090    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:27.000090    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:27.000192    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:27.000192    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:27.004613    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:27.004613    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:27.004696    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:27 GMT
	I0229 02:42:27.004696    1532 round_trippers.go:580]     Audit-Id: 47af4e67-b91f-4b04-af66-916b45057dad
	I0229 02:42:27.004696    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:27.004696    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:27.004696    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:27.004696    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:27.005145    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:27.498722    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:27.498722    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:27.498722    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:27.498722    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:27.503044    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:27.503044    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:27.503044    1532 round_trippers.go:580]     Audit-Id: 0beaac30-6531-4e36-a77c-5a4c1f201f9c
	I0229 02:42:27.503393    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:27.503393    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:27.503393    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:27.503393    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:27.503393    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:27 GMT
	I0229 02:42:27.503803    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:27.999550    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:27.999775    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:27.999775    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:27.999775    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:28.002880    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:28.003787    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:28.003787    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:28.003787    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:28.003787    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:28.003787    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:28.003787    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:28 GMT
	I0229 02:42:28.003787    1532 round_trippers.go:580]     Audit-Id: 417fbc2b-acf9-4a24-ab97-9645f8a68925
	I0229 02:42:28.004297    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:28.004871    1532 node_ready.go:58] node "multinode-314500" has status "Ready":"False"
	I0229 02:42:28.501125    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:28.501215    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:28.501215    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:28.501215    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:28.506245    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:28.506245    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:28.506245    1532 round_trippers.go:580]     Audit-Id: 7601a368-12cb-4605-9dd1-7b8b7ac96907
	I0229 02:42:28.506245    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:28.506245    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:28.506245    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:28.506245    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:28.506245    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:28 GMT
	I0229 02:42:28.506245    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:29.000605    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:29.000814    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:29.000814    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:29.000814    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:29.006123    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:29.006123    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:29.006123    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:29.006123    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:29.006123    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:29.006123    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:29 GMT
	I0229 02:42:29.006123    1532 round_trippers.go:580]     Audit-Id: 5b3c1156-f827-4416-83da-1ab62fce6470
	I0229 02:42:29.006123    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:29.006123    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:29.501803    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:29.501803    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:29.501803    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:29.501803    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:29.506266    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:29.506648    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:29.506648    1532 round_trippers.go:580]     Audit-Id: c7b406ea-c37a-4e32-ab6e-98f2c844d01f
	I0229 02:42:29.506648    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:29.506648    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:29.506648    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:29.506648    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:29.506648    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:29 GMT
	I0229 02:42:29.506886    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:29.507314    1532 node_ready.go:49] node "multinode-314500" has status "Ready":"True"
	I0229 02:42:29.507380    1532 node_ready.go:38] duration metric: took 6.0169309s waiting for node "multinode-314500" to be "Ready" ...
	I0229 02:42:29.507380    1532 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:42:29.507523    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods
	I0229 02:42:29.507594    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:29.507594    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:29.507594    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:29.512860    1532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:42:29.512860    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:29.512860    1532 round_trippers.go:580]     Audit-Id: f3bd73d8-9ae2-4d3d-82db-c37f590b81aa
	I0229 02:42:29.512860    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:29.512860    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:29.512860    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:29.512860    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:29.512860    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:29 GMT
	I0229 02:42:29.514654    1532 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2004"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 70099 chars]
	I0229 02:42:29.517593    1532 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:29.517746    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:29.517746    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:29.517746    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:29.517816    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:29.520487    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:29.520487    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:29.520487    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:29 GMT
	I0229 02:42:29.520487    1532 round_trippers.go:580]     Audit-Id: 3a690a89-6002-415c-8d4a-f87b6db67c13
	I0229 02:42:29.521126    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:29.521126    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:29.521126    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:29.521126    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:29.521321    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:29.522008    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:29.522008    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:29.522008    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:29.522072    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:29.524285    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:29.524285    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:29.525278    1532 round_trippers.go:580]     Audit-Id: 1f7863c5-64b9-46ae-b389-e3f3c5598660
	I0229 02:42:29.525278    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:29.525278    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:29.525278    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:29.525278    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:29.525278    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:29 GMT
	I0229 02:42:29.525773    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:30.019790    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:30.020098    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:30.020098    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:30.020098    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:30.024370    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:30.024761    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:30.024761    1532 round_trippers.go:580]     Audit-Id: 6346db92-6740-443a-9869-65aef1977379
	I0229 02:42:30.024761    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:30.024761    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:30.024761    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:30.024761    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:30.024839    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:30 GMT
	I0229 02:42:30.025000    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:30.025629    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:30.025739    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:30.025739    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:30.025739    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:30.029100    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:30.029399    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:30.029399    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:30.029399    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:30.029399    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:30.029399    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:30.029440    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:30 GMT
	I0229 02:42:30.029440    1532 round_trippers.go:580]     Audit-Id: 2f92f838-dd56-4242-9738-4c6904e99a84
	I0229 02:42:30.030285    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:30.520248    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:30.520326    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:30.520326    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:30.520326    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:30.524091    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:30.525147    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:30.525147    1532 round_trippers.go:580]     Audit-Id: 08c0e788-9a7e-4971-9d77-7d8299956494
	I0229 02:42:30.525147    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:30.525147    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:30.525147    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:30.525147    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:30.525147    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:30 GMT
	I0229 02:42:30.525794    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:30.526485    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:30.526485    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:30.526485    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:30.526485    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:30.530646    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:30.530646    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:30.530646    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:30.530646    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:30.530646    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:30.530646    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:30.530646    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:30 GMT
	I0229 02:42:30.530646    1532 round_trippers.go:580]     Audit-Id: 5884cbe8-7c78-4dc5-bf2a-98fef91872d8
	I0229 02:42:30.530646    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:31.021909    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:31.022164    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:31.022164    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:31.022164    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:31.030657    1532 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 02:42:31.030657    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:31.030657    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:31.030657    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:31.030657    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:31 GMT
	I0229 02:42:31.030657    1532 round_trippers.go:580]     Audit-Id: 8d2402d7-2652-42e8-8ab7-990335c3698b
	I0229 02:42:31.030657    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:31.030657    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:31.031486    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:31.032393    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:31.032463    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:31.032463    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:31.032494    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:31.035776    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:31.035921    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:31.035966    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:31 GMT
	I0229 02:42:31.035966    1532 round_trippers.go:580]     Audit-Id: de5d2437-e168-4a29-be3d-639a627ab403
	I0229 02:42:31.035966    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:31.035966    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:31.035966    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:31.035966    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:31.036246    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:31.524118    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:31.524118    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:31.524118    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:31.524118    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:31.528263    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:31.529075    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:31.529075    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:31.529195    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:31.529195    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:31 GMT
	I0229 02:42:31.529195    1532 round_trippers.go:580]     Audit-Id: fba87570-e18e-4ed5-8b45-b28fe236ff01
	I0229 02:42:31.529195    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:31.529195    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:31.529568    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:31.530467    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:31.530567    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:31.530600    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:31.530600    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:31.536861    1532 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:42:31.537014    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:31.537041    1532 round_trippers.go:580]     Audit-Id: 32984390-bfee-4275-a455-3fb34366d49f
	I0229 02:42:31.537041    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:31.537041    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:31.537041    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:31.537041    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:31.537098    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:31 GMT
	I0229 02:42:31.537098    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:31.537098    1532 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:42:32.029786    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:32.029786    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:32.029786    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:32.029786    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:32.033662    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:32.033841    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:32.033841    1532 round_trippers.go:580]     Audit-Id: 7d7d392c-3c6a-4f38-8dfb-b5b932c569e9
	I0229 02:42:32.033841    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:32.033841    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:32.033841    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:32.033841    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:32.033841    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:32 GMT
	I0229 02:42:32.033841    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:32.034903    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:32.034963    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:32.034963    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:32.034963    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:32.038322    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:32.038322    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:32.038322    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:32.038322    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:32.038322    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:32.038322    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:32 GMT
	I0229 02:42:32.038322    1532 round_trippers.go:580]     Audit-Id: 81eb5156-9c6f-4dec-9354-09c6fa77ddbe
	I0229 02:42:32.038322    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:32.039710    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:32.531084    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:32.531154    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:32.531154    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:32.531154    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:32.534723    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:32.535740    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:32.535740    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:32.535740    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:32.535740    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:32 GMT
	I0229 02:42:32.535740    1532 round_trippers.go:580]     Audit-Id: dd7b7952-4800-41b2-8403-410a2000bec5
	I0229 02:42:32.535740    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:32.535740    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:32.536001    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:32.536715    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:32.536715    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:32.536715    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:32.536715    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:32.539187    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:32.540291    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:32.540291    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:32.540291    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:32.540291    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:32.540291    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:32.540291    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:32 GMT
	I0229 02:42:32.540291    1532 round_trippers.go:580]     Audit-Id: fcdb4160-096c-4fe7-9717-4c89d05bc4ea
	I0229 02:42:32.540568    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:33.032248    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:33.032371    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:33.032371    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:33.032371    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:33.039719    1532 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:42:33.039809    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:33.039809    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:33.039847    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:33 GMT
	I0229 02:42:33.039847    1532 round_trippers.go:580]     Audit-Id: 87bed496-4ded-41f2-a9ec-d02cf0b76476
	I0229 02:42:33.039847    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:33.039847    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:33.039847    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:33.040100    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:33.040299    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:33.040299    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:33.040299    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:33.040299    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:33.045103    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:33.045240    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:33.045240    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:33.045240    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:33.045240    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:33 GMT
	I0229 02:42:33.045240    1532 round_trippers.go:580]     Audit-Id: 450ab1d0-e519-4f5d-84a1-a1d8355adf3b
	I0229 02:42:33.045296    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:33.045296    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:33.045296    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:33.518802    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:33.518899    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:33.518899    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:33.518985    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:33.522257    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:33.522257    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:33.523290    1532 round_trippers.go:580]     Audit-Id: c047f467-224d-4666-a078-e0a25b3b53df
	I0229 02:42:33.523290    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:33.523290    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:33.523290    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:33.523352    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:33.523352    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:33 GMT
	I0229 02:42:33.523631    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:33.524477    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:33.524477    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:33.524567    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:33.524567    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:33.527742    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:33.528276    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:33.528276    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:33 GMT
	I0229 02:42:33.528276    1532 round_trippers.go:580]     Audit-Id: ac79d203-bdaa-476a-8f2e-daacb125a68b
	I0229 02:42:33.528276    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:33.528276    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:33.528276    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:33.528276    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:33.529115    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:34.021701    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:34.021701    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:34.021701    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:34.021701    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:34.026232    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:34.026687    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:34.026687    1532 round_trippers.go:580]     Audit-Id: c70140be-5ead-4862-b127-051263028635
	I0229 02:42:34.026687    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:34.026687    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:34.026687    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:34.026687    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:34.026687    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:34 GMT
	I0229 02:42:34.026924    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:34.027672    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:34.027672    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:34.027737    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:34.027737    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:34.030873    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:34.030873    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:34.030873    1532 round_trippers.go:580]     Audit-Id: 3e6de93e-a219-410c-9951-273886722341
	I0229 02:42:34.030873    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:34.030873    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:34.030873    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:34.030873    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:34.031549    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:34 GMT
	I0229 02:42:34.031832    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:34.032241    1532 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:42:34.524030    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:34.524083    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:34.524083    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:34.524083    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:34.527411    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:34.527411    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:34.527411    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:34.527411    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:34.527411    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:34.527411    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:34 GMT
	I0229 02:42:34.527411    1532 round_trippers.go:580]     Audit-Id: b9142a36-7c00-4bab-952d-6b9f7cb79e29
	I0229 02:42:34.527411    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:34.527411    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:34.528434    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:34.528434    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:34.528434    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:34.528434    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:34.535267    1532 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:42:34.535267    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:34.535267    1532 round_trippers.go:580]     Audit-Id: 2f08c29d-179a-4dce-b0af-993dade1f3e7
	I0229 02:42:34.535267    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:34.535267    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:34.535267    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:34.535267    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:34.535267    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:34 GMT
	I0229 02:42:34.535836    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:35.030605    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:35.030819    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:35.030819    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:35.030819    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:35.035127    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:35.035492    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:35.035492    1532 round_trippers.go:580]     Audit-Id: 396c49e8-19e3-4255-b7d0-3a1f3a00c8ec
	I0229 02:42:35.035492    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:35.035492    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:35.035492    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:35.035492    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:35.035602    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:35 GMT
	I0229 02:42:35.035807    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:35.036622    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:35.036694    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:35.036694    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:35.036694    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:35.039545    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:35.040548    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:35.040589    1532 round_trippers.go:580]     Audit-Id: c601e285-8c60-497c-b518-7a3a106f2fee
	I0229 02:42:35.040589    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:35.040589    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:35.040589    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:35.040589    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:35.040589    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:35 GMT
	I0229 02:42:35.041446    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:35.521774    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:35.521774    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:35.521774    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:35.521774    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:35.528372    1532 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:42:35.528372    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:35.528372    1532 round_trippers.go:580]     Audit-Id: b157fd61-caaa-480c-bd98-45de216fa95b
	I0229 02:42:35.528372    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:35.528372    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:35.528372    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:35.528372    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:35.528372    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:35 GMT
	I0229 02:42:35.529135    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:35.529288    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:35.529288    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:35.529288    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:35.529826    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:35.534024    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:35.534024    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:35.534024    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:35.535019    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:35.535019    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:35 GMT
	I0229 02:42:35.535019    1532 round_trippers.go:580]     Audit-Id: 90c35cba-44f9-4229-8415-bb2ea1b87a4f
	I0229 02:42:35.535019    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:35.535019    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:35.535019    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:36.029046    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:36.029046    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:36.029046    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:36.029046    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:36.036630    1532 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:42:36.036630    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:36.036630    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:36.036630    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:36.036630    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:36.037243    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:36 GMT
	I0229 02:42:36.037243    1532 round_trippers.go:580]     Audit-Id: 58ede345-7c6c-46d1-b459-12404cf70f2c
	I0229 02:42:36.037243    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:36.037540    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:36.038405    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:36.038478    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:36.038478    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:36.038478    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:36.041883    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:36.041940    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:36.041940    1532 round_trippers.go:580]     Audit-Id: f3e4f74c-7f1f-4713-9415-b3c64b8aa292
	I0229 02:42:36.041940    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:36.041940    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:36.041940    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:36.041940    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:36.041940    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:36 GMT
	I0229 02:42:36.041940    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:36.042555    1532 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:42:36.528946    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:36.529029    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:36.529114    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:36.529114    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:36.534550    1532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:42:36.535312    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:36.535312    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:36.535312    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:36.535312    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:36 GMT
	I0229 02:42:36.535312    1532 round_trippers.go:580]     Audit-Id: 1e8e32f9-7249-4e56-b511-dae41b4c6157
	I0229 02:42:36.535384    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:36.535384    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:36.535384    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:36.536307    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:36.536307    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:36.536387    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:36.536387    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:36.540591    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:36.540591    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:36.540591    1532 round_trippers.go:580]     Audit-Id: bfb336af-faee-4b1c-9809-ea5acbe6bba0
	I0229 02:42:36.540591    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:36.540591    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:36.540591    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:36.540591    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:36.540591    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:36 GMT
	I0229 02:42:36.541713    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:37.019484    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:37.019484    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:37.019484    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:37.019484    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:37.023632    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:37.023632    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:37.023632    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:37.023632    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:37.023632    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:37.023632    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:37 GMT
	I0229 02:42:37.023632    1532 round_trippers.go:580]     Audit-Id: 8b1fc6ab-9730-4d9b-a6ff-bfdde8a3d806
	I0229 02:42:37.023632    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:37.023872    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:37.024515    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:37.024589    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:37.024589    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:37.024589    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:37.031940    1532 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:42:37.031940    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:37.031940    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:37.031940    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:37.031940    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:37 GMT
	I0229 02:42:37.031940    1532 round_trippers.go:580]     Audit-Id: 21d8f902-efbd-41e9-9ad0-eb61b3d23b7c
	I0229 02:42:37.031940    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:37.031940    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:37.033437    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:37.519675    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:37.519754    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:37.519823    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:37.519823    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:37.524898    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:37.524898    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:37.524898    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:37.525018    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:37.525018    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:37.525018    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:37 GMT
	I0229 02:42:37.525018    1532 round_trippers.go:580]     Audit-Id: 16c8e1f2-72a2-44c0-81fe-aa312a1ca737
	I0229 02:42:37.525018    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:37.525098    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:37.526673    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:37.526778    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:37.526778    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:37.526778    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:37.530156    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:37.530242    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:37.530242    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:37.530242    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:37.530242    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:37.530242    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:37.530242    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:37 GMT
	I0229 02:42:37.530242    1532 round_trippers.go:580]     Audit-Id: e52216bf-67ea-48c2-8659-a24678cb5ce9
	I0229 02:42:37.530310    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:38.030928    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:38.030928    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:38.030928    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:38.031008    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:38.035661    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:38.036008    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:38.036008    1532 round_trippers.go:580]     Audit-Id: 1a4a89bf-9660-4e3e-b504-e85e3fe404fa
	I0229 02:42:38.036008    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:38.036008    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:38.036008    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:38.036008    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:38.036008    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:38 GMT
	I0229 02:42:38.036104    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:38.036853    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:38.036853    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:38.036853    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:38.036853    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:38.042943    1532 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:42:38.042943    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:38.042943    1532 round_trippers.go:580]     Audit-Id: c9f62ff4-fc90-482d-96d1-6290e4b39646
	I0229 02:42:38.042943    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:38.042943    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:38.042943    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:38.042943    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:38.042943    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:38 GMT
	I0229 02:42:38.042943    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:38.044014    1532 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:42:38.529355    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:38.529355    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:38.529355    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:38.529355    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:38.533944    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:38.533944    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:38.533944    1532 round_trippers.go:580]     Audit-Id: e33e9a76-c8f5-4bbd-868c-f0aec7fe2878
	I0229 02:42:38.533944    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:38.533944    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:38.533944    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:38.533944    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:38.533944    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:38 GMT
	I0229 02:42:38.533944    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:38.535572    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:38.535572    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:38.535572    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:38.535572    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:38.542454    1532 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:42:38.542454    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:38.542454    1532 round_trippers.go:580]     Audit-Id: 51086a11-9c55-4d95-85f4-58c0d963ec3d
	I0229 02:42:38.542454    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:38.542454    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:38.542454    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:38.542454    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:38.542454    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:38 GMT
	I0229 02:42:38.542454    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:39.030447    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:39.030447    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:39.030447    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:39.030447    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:39.039091    1532 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 02:42:39.039091    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:39.039091    1532 round_trippers.go:580]     Audit-Id: ce584817-f37f-4215-aecf-fc5a309af975
	I0229 02:42:39.039091    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:39.039091    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:39.039091    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:39.039091    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:39.039091    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:39 GMT
	I0229 02:42:39.040056    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:39.040743    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:39.040773    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:39.040816    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:39.040816    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:39.044050    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:39.044050    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:39.044050    1532 round_trippers.go:580]     Audit-Id: d08d8fc9-4488-470a-aeab-1e2e99ed1321
	I0229 02:42:39.044050    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:39.044050    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:39.044050    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:39.044050    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:39.044050    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:39 GMT
	I0229 02:42:39.044050    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:39.531894    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:39.531991    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:39.531991    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:39.531991    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:39.536240    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:39.536240    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:39.536240    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:39.536240    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:39.536240    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:39 GMT
	I0229 02:42:39.536240    1532 round_trippers.go:580]     Audit-Id: ebaa88d6-8655-4a5f-823e-98bc188656b9
	I0229 02:42:39.536240    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:39.536240    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:39.536240    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:39.537421    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:39.537421    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:39.537511    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:39.537511    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:39.541717    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:39.541717    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:39.542338    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:39 GMT
	I0229 02:42:39.542338    1532 round_trippers.go:580]     Audit-Id: dfa7c0d0-2580-4b87-a71a-2241acd67772
	I0229 02:42:39.542338    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:39.542338    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:39.542338    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:39.542338    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:39.542606    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:40.030616    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:40.030691    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:40.030691    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:40.030691    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:40.035389    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:40.036010    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:40.036010    1532 round_trippers.go:580]     Audit-Id: 771cb54f-07c0-4668-b4c0-cd16519abad3
	I0229 02:42:40.036010    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:40.036010    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:40.036010    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:40.036010    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:40.036010    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:40 GMT
	I0229 02:42:40.036079    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:40.036952    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:40.036952    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:40.036952    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:40.036952    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:40.040210    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:40.040210    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:40.040210    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:40.040388    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:40.040388    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:40.040388    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:40.040388    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:40 GMT
	I0229 02:42:40.040388    1532 round_trippers.go:580]     Audit-Id: 6cca2caf-9345-4430-93c3-659dbda40622
	I0229 02:42:40.040628    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:40.532692    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:40.532783    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:40.532783    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:40.532872    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:40.541137    1532 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 02:42:40.541137    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:40.541137    1532 round_trippers.go:580]     Audit-Id: bd5c62d2-b684-48f3-bae3-9e6e2223253c
	I0229 02:42:40.541137    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:40.541137    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:40.541137    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:40.541137    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:40.541137    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:40 GMT
	I0229 02:42:40.541137    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:40.542766    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:40.542876    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:40.542876    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:40.542924    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:40.546144    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:40.546144    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:40.546144    1532 round_trippers.go:580]     Audit-Id: a5183745-8f41-4a9c-94db-83112b9cf49d
	I0229 02:42:40.546144    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:40.546144    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:40.546144    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:40.546385    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:40.546385    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:40 GMT
	I0229 02:42:40.546385    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:40.547186    1532 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:42:41.032638    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:41.032801    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:41.032801    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:41.032801    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:41.036687    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:41.036687    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:41.036687    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:41.036687    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:41.037192    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:41.037192    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:41 GMT
	I0229 02:42:41.037192    1532 round_trippers.go:580]     Audit-Id: 80ed4ff2-7723-4c13-b7bc-08042779f65d
	I0229 02:42:41.037192    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:41.037192    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:41.038162    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:41.038235    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:41.038235    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:41.038235    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:41.042811    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:41.043200    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:41.043200    1532 round_trippers.go:580]     Audit-Id: 4276d5fa-8cdd-4ea3-8b9f-fe30327d82b6
	I0229 02:42:41.043200    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:41.043200    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:41.043200    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:41.043200    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:41.043200    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:41 GMT
	I0229 02:42:41.043555    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:41.533653    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:41.533653    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:41.533653    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:41.533653    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:41.538663    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:41.538663    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:41.538663    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:41 GMT
	I0229 02:42:41.538663    1532 round_trippers.go:580]     Audit-Id: 3f2def52-2e7e-4353-9ada-d3ad35c7461c
	I0229 02:42:41.538663    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:41.538768    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:41.538768    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:41.538768    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:41.538988    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:41.539742    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:41.539742    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:41.539742    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:41.539742    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:41.543811    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:41.543811    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:41.543811    1532 round_trippers.go:580]     Audit-Id: 0fd88b12-29b9-4a0e-a7f3-40debbf2b3ba
	I0229 02:42:41.543811    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:41.543811    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:41.543811    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:41.544264    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:41.544264    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:41 GMT
	I0229 02:42:41.544341    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:42.034997    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:42.035093    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:42.035093    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:42.035093    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:42.038458    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:42.038458    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:42.039468    1532 round_trippers.go:580]     Audit-Id: a2698305-b046-4012-8e87-8bf79993d2c6
	I0229 02:42:42.039468    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:42.039580    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:42.039580    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:42.039580    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:42.039580    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:42 GMT
	I0229 02:42:42.039792    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:42.040437    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:42.040508    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:42.040508    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:42.040508    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:42.043800    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:42.044048    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:42.044048    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:42.044048    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:42.044048    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:42.044048    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:42.044048    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:42 GMT
	I0229 02:42:42.044165    1532 round_trippers.go:580]     Audit-Id: 6fae7c0f-f838-4dfd-9a12-912702dc137e
	I0229 02:42:42.044394    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:42.522752    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:42.522752    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:42.522752    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:42.522752    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:42.528729    1532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:42:42.528729    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:42.528729    1532 round_trippers.go:580]     Audit-Id: 9f9f6ff5-a0c3-488a-ad5f-a7655ec073ad
	I0229 02:42:42.528729    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:42.528729    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:42.528729    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:42.528729    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:42.528729    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:42 GMT
	I0229 02:42:42.528729    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:42.529970    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:42.529970    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:42.529970    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:42.529970    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:42.533380    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:42.534356    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:42.534356    1532 round_trippers.go:580]     Audit-Id: 9cbe03ff-dc1e-41e1-8605-8726572de6e0
	I0229 02:42:42.534356    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:42.534356    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:42.534356    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:42.534356    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:42.534356    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:42 GMT
	I0229 02:42:42.534356    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:43.022026    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:43.022284    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:43.022284    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:43.022284    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:43.027364    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:43.027415    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:43.027415    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:43.027415    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:43.027415    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:43 GMT
	I0229 02:42:43.027415    1532 round_trippers.go:580]     Audit-Id: 2903b758-0f7f-44bc-bb04-0f14ece6cd23
	I0229 02:42:43.027415    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:43.027415    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:43.027618    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:43.028294    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:43.028294    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:43.028294    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:43.028294    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:43.032570    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:43.032570    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:43.032781    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:43.032781    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:43.032781    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:43.032781    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:43.032781    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:43 GMT
	I0229 02:42:43.032781    1532 round_trippers.go:580]     Audit-Id: 47db3b12-14e8-45ad-b614-b9128371f4f3
	I0229 02:42:43.032915    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:43.032915    1532 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:42:43.519347    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:43.519347    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:43.519347    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:43.519347    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:43.523542    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:43.523877    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:43.523877    1532 round_trippers.go:580]     Audit-Id: 54d19cfe-c9a2-4fe8-8bd0-0c45ab35b9b8
	I0229 02:42:43.523877    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:43.523877    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:43.523877    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:43.523877    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:43.523877    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:43 GMT
	I0229 02:42:43.524115    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:43.524729    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:43.524729    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:43.524729    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:43.524821    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:43.532692    1532 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:42:43.532692    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:43.532692    1532 round_trippers.go:580]     Audit-Id: b4c6fd04-fe2b-46ac-9940-8d06356a4d45
	I0229 02:42:43.532692    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:43.532692    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:43.532692    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:43.532692    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:43.532692    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:43 GMT
	I0229 02:42:43.533732    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:44.021540    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:44.021540    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:44.021540    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:44.021540    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:44.026106    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:44.026106    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:44.026106    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:44.026106    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:44.026106    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:44.026106    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:44 GMT
	I0229 02:42:44.026106    1532 round_trippers.go:580]     Audit-Id: eeba041c-3fd7-4a97-b248-0688dfba7107
	I0229 02:42:44.026106    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:44.026314    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:44.027837    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:44.027925    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:44.027925    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:44.027925    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:44.032048    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:44.032048    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:44.032048    1532 round_trippers.go:580]     Audit-Id: 41d66d1a-610d-46bc-a671-5e494162c854
	I0229 02:42:44.032048    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:44.032117    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:44.032117    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:44.032117    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:44.032117    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:44 GMT
	I0229 02:42:44.032318    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:44.521256    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:44.521256    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:44.521256    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:44.521256    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:44.525906    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:44.525906    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:44.525906    1532 round_trippers.go:580]     Audit-Id: 436534d0-e307-4d91-b0c0-87f8ad073da8
	I0229 02:42:44.525906    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:44.525906    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:44.525906    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:44.525906    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:44.526074    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:44 GMT
	I0229 02:42:44.526284    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:44.527011    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:44.527011    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:44.527011    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:44.527011    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:44.531019    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:44.531019    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:44.531019    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:44.531019    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:44 GMT
	I0229 02:42:44.531019    1532 round_trippers.go:580]     Audit-Id: 2dd9544f-0e4e-4fa1-8a03-2d17d422c845
	I0229 02:42:44.531019    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:44.531019    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:44.531019    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:44.531889    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:45.026958    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:45.026958    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.026958    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.026958    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.031368    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:45.031405    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.031405    1532 round_trippers.go:580]     Audit-Id: 0b8fea62-3449-4646-a7bf-807f4343e251
	I0229 02:42:45.031405    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.031405    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.031405    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.031486    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.031486    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.031604    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:45.032373    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:45.032440    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.032440    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.032440    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.036779    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:45.038215    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.038215    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.038279    1532 round_trippers.go:580]     Audit-Id: 409e3d91-3cf5-4c78-a141-1d33c3c29618
	I0229 02:42:45.038279    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.038279    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.038426    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.038464    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.038717    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:45.039256    1532 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:42:45.519371    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:45.519371    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.519371    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.519485    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.525854    1532 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:42:45.525854    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.525854    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.525854    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.525854    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.525854    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.525854    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.525854    1532 round_trippers.go:580]     Audit-Id: f44e01be-ce76-4fd3-96de-b3cfa7e37ea0
	I0229 02:42:45.526540    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2045","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6492 chars]
	I0229 02:42:45.527440    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:45.527440    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.527440    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.527440    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.531140    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:45.531140    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.531140    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.531140    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.531140    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.531140    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.531140    1532 round_trippers.go:580]     Audit-Id: a78856b0-105e-4619-a5e8-dd84e8dfbefb
	I0229 02:42:45.531140    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.531140    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:45.531140    1532 pod_ready.go:92] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"True"
	I0229 02:42:45.531140    1532 pod_ready.go:81] duration metric: took 16.0126552s waiting for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.531140    1532 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.531140    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-314500
	I0229 02:42:45.531140    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.531140    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.531140    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.535519    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:45.535519    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.535519    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.535519    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.535519    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.535519    1532 round_trippers.go:580]     Audit-Id: 8567707e-fe98-4ad8-b2ca-3bc0079b1807
	I0229 02:42:45.535519    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.535519    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.535519    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-314500","namespace":"kube-system","uid":"64dda041-1f1d-4866-aa39-62d21bd84e46","resourceVersion":"2022","creationTimestamp":"2024-02-29T02:42:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.2.238:2379","kubernetes.io/config.hash":"96721f37a1f14642fee9a072efcaa322","kubernetes.io/config.mirror":"96721f37a1f14642fee9a072efcaa322","kubernetes.io/config.seen":"2024-02-29T02:42:14.259019103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5853 chars]
	I0229 02:42:45.536545    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:45.536545    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.536545    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.536545    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.540166    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:45.540166    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.540166    1532 round_trippers.go:580]     Audit-Id: 7d770b43-824f-49f7-a8f9-72a750430413
	I0229 02:42:45.540166    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.540241    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.540241    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.540241    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.540241    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.540435    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:45.540883    1532 pod_ready.go:92] pod "etcd-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:42:45.540949    1532 pod_ready.go:81] duration metric: took 9.7432ms waiting for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.540949    1532 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.541020    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-314500
	I0229 02:42:45.541020    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.541091    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.541091    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.543540    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:45.543540    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.543540    1532 round_trippers.go:580]     Audit-Id: 7b95ed88-3bdc-494c-b565-af3b814a4a52
	I0229 02:42:45.543540    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.543540    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.543540    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.543540    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.543540    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.543540    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-314500","namespace":"kube-system","uid":"baa3dc33-6d86-4748-9d57-c64f45dcfbf7","resourceVersion":"2019","creationTimestamp":"2024-02-29T02:42:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.2.238:8443","kubernetes.io/config.hash":"a317731b2b94a8e14311676e58d24e16","kubernetes.io/config.mirror":"a317731b2b94a8e14311676e58d24e16","kubernetes.io/config.seen":"2024-02-29T02:42:14.259032504Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7391 chars]
	I0229 02:42:45.543540    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:45.543540    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.543540    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.543540    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.547201    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:45.547969    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.547969    1532 round_trippers.go:580]     Audit-Id: 6aa920d5-9bfc-49c3-9686-4e96bc639a85
	I0229 02:42:45.547969    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.547969    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.547969    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.547969    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.547969    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.548186    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:45.548218    1532 pod_ready.go:92] pod "kube-apiserver-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:42:45.548218    1532 pod_ready.go:81] duration metric: took 7.2686ms waiting for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.548218    1532 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.548745    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-314500
	I0229 02:42:45.548785    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.548785    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.548785    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.550965    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:45.550965    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.550965    1532 round_trippers.go:580]     Audit-Id: 49d8abd2-c8d2-49a8-906c-fdeea0174ee6
	I0229 02:42:45.550965    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.550965    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.550965    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.550965    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.550965    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.552055    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-314500","namespace":"kube-system","uid":"58e57902-e113-44a9-b5b5-4aba2ba13491","resourceVersion":"2021","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.mirror":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.seen":"2024-02-29T02:15:52.221398986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7171 chars]
	I0229 02:42:45.552589    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:45.552589    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.552589    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.552589    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.554828    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:45.554828    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.554828    1532 round_trippers.go:580]     Audit-Id: 17d2b563-9991-4d83-935e-58c9edfdd70f
	I0229 02:42:45.554828    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.554828    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.554828    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.554828    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.554828    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.554828    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:45.554828    1532 pod_ready.go:92] pod "kube-controller-manager-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:42:45.554828    1532 pod_ready.go:81] duration metric: took 6.6098ms waiting for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.554828    1532 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.554828    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gbrl
	I0229 02:42:45.554828    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.554828    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.554828    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.559164    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:45.559164    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.559164    1532 round_trippers.go:580]     Audit-Id: 5505ef6f-d501-489a-9739-b14fe17d3c28
	I0229 02:42:45.559164    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.559164    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.559164    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.559164    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.559164    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.559164    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4gbrl","generateName":"kube-proxy-","namespace":"kube-system","uid":"accb56cb-79ee-4f16-b05e-91bf554c4a60","resourceVersion":"1598","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0229 02:42:45.560232    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:42:45.560232    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.560232    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.560232    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.562480    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:45.562794    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.562794    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.562794    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.562794    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.562794    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.562794    1532 round_trippers.go:580]     Audit-Id: aa713fa4-7805-41f9-9c5b-de1e783a6770
	I0229 02:42:45.562794    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.562937    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"2332789d-7280-427a-9644-fc1ffcfc737d","resourceVersion":"1763","creationTimestamp":"2024-02-29T02:35:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:35:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3803 chars]
	I0229 02:42:45.563175    1532 pod_ready.go:92] pod "kube-proxy-4gbrl" in "kube-system" namespace has status "Ready":"True"
	I0229 02:42:45.563277    1532 pod_ready.go:81] duration metric: took 8.4485ms waiting for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.563277    1532 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.729272    1532 request.go:629] Waited for 165.7218ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:42:45.729421    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:42:45.729421    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.729421    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.729421    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.733894    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:45.733894    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.733894    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.733894    1532 round_trippers.go:580]     Audit-Id: 73735785-d973-4fb8-a4f5-93401373f12b
	I0229 02:42:45.733894    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.733894    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.733894    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.734021    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.734184    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6r6j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b84b22d-3786-4f9e-a23a-c7cfc93bb671","resourceVersion":"1923","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5735 chars]
	I0229 02:42:45.929446    1532 request.go:629] Waited for 194.4118ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:45.929940    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:45.929992    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.929992    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.929992    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.934129    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:45.934129    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.934129    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:46 GMT
	I0229 02:42:45.934129    1532 round_trippers.go:580]     Audit-Id: 37fd20f5-1f19-4e6a-84d9-26d049e4a9b7
	I0229 02:42:45.934129    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.934129    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.934129    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.934129    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.934129    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:45.934812    1532 pod_ready.go:92] pod "kube-proxy-6r6j4" in "kube-system" namespace has status "Ready":"True"
	I0229 02:42:45.934897    1532 pod_ready.go:81] duration metric: took 371.5987ms waiting for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.934897    1532 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:46.132712    1532 request.go:629] Waited for 197.7056ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:42:46.132712    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:42:46.132712    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:46.132712    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:46.132712    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:46.137669    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:46.137669    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:46.137669    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:46.137669    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:46 GMT
	I0229 02:42:46.137669    1532 round_trippers.go:580]     Audit-Id: d52651ee-608b-4ed2-aba0-86a6c9b316e0
	I0229 02:42:46.137669    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:46.137669    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:46.137669    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:46.138193    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-314500","namespace":"kube-system","uid":"31fcecc6-17de-43a6-892d-37cd915de64b","resourceVersion":"2006","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.mirror":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.seen":"2024-02-29T02:15:52.221399886Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4901 chars]
	I0229 02:42:46.335467    1532 request.go:629] Waited for 196.596ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:46.335830    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:46.335830    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:46.335830    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:46.335830    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:46.340675    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:46.340675    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:46.340675    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:46.340675    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:46.340675    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:46.340675    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:46.340675    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:46 GMT
	I0229 02:42:46.340675    1532 round_trippers.go:580]     Audit-Id: ff091a9f-7e11-40a3-9bb2-080c0dc6884a
	I0229 02:42:46.341196    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:46.341885    1532 pod_ready.go:92] pod "kube-scheduler-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:42:46.341973    1532 pod_ready.go:81] duration metric: took 407.0532ms waiting for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:46.341973    1532 pod_ready.go:38] duration metric: took 16.8336558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:42:46.342076    1532 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:42:46.352159    1532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:42:46.379357    1532 command_runner.go:130] > 1764
	I0229 02:42:46.379430    1532 api_server.go:72] duration metric: took 22.9973833s to wait for apiserver process to appear ...
	I0229 02:42:46.379430    1532 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:42:46.379517    1532 api_server.go:253] Checking apiserver healthz at https://172.19.2.238:8443/healthz ...
	I0229 02:42:46.387427    1532 api_server.go:279] https://172.19.2.238:8443/healthz returned 200:
	ok
	I0229 02:42:46.387427    1532 round_trippers.go:463] GET https://172.19.2.238:8443/version
	I0229 02:42:46.387427    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:46.387427    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:46.387427    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:46.389008    1532 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 02:42:46.389621    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:46.389621    1532 round_trippers.go:580]     Audit-Id: 92aa30bd-81d9-475b-97e4-6e8fcd63cf76
	I0229 02:42:46.389621    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:46.389621    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:46.389621    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:46.389621    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:46.389621    1532 round_trippers.go:580]     Content-Length: 264
	I0229 02:42:46.389621    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:46 GMT
	I0229 02:42:46.389621    1532 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0229 02:42:46.389621    1532 api_server.go:141] control plane version: v1.28.4
	I0229 02:42:46.389621    1532 api_server.go:131] duration metric: took 10.1904ms to wait for apiserver health ...
	I0229 02:42:46.389621    1532 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:42:46.521976    1532 request.go:629] Waited for 132.0864ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods
	I0229 02:42:46.521976    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods
	I0229 02:42:46.521976    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:46.522171    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:46.522171    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:46.529571    1532 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:42:46.529571    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:46.529571    1532 round_trippers.go:580]     Audit-Id: a9ef5420-3445-4982-a012-2b92eeb07218
	I0229 02:42:46.529571    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:46.529571    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:46.529571    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:46.529571    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:46.529571    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:46 GMT
	I0229 02:42:46.530783    1532 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2049"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2045","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 69073 chars]
	I0229 02:42:46.533544    1532 system_pods.go:59] 10 kube-system pods found
	I0229 02:42:46.533544    1532 system_pods.go:61] "coredns-5dd5756b68-8g6tg" [ef7fb259-9f24-4645-9eff-2b16f6789e1b] Running
	I0229 02:42:46.533544    1532 system_pods.go:61] "etcd-multinode-314500" [64dda041-1f1d-4866-aa39-62d21bd84e46] Running
	I0229 02:42:46.533544    1532 system_pods.go:61] "kindnet-6r7b8" [402c3ac1-05a9-45f1-aa7d-c0fb8ced6c87] Running
	I0229 02:42:46.533544    1532 system_pods.go:61] "kindnet-t9r77" [4620d417-744c-4049-82ab-79d1ee7f047c] Running
	I0229 02:42:46.533544    1532 system_pods.go:61] "kube-apiserver-multinode-314500" [baa3dc33-6d86-4748-9d57-c64f45dcfbf7] Running
	I0229 02:42:46.533544    1532 system_pods.go:61] "kube-controller-manager-multinode-314500" [58e57902-e113-44a9-b5b5-4aba2ba13491] Running
	I0229 02:42:46.533544    1532 system_pods.go:61] "kube-proxy-4gbrl" [accb56cb-79ee-4f16-b05e-91bf554c4a60] Running
	I0229 02:42:46.533544    1532 system_pods.go:61] "kube-proxy-6r6j4" [2b84b22d-3786-4f9e-a23a-c7cfc93bb671] Running
	I0229 02:42:46.533544    1532 system_pods.go:61] "kube-scheduler-multinode-314500" [31fcecc6-17de-43a6-892d-37cd915de64b] Running
	I0229 02:42:46.533544    1532 system_pods.go:61] "storage-provisioner" [9780520b-8ff9-408a-ab6f-41b63790ccd1] Running
	I0229 02:42:46.533544    1532 system_pods.go:74] duration metric: took 143.9154ms to wait for pod list to return data ...
	I0229 02:42:46.533544    1532 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:42:46.725637    1532 request.go:629] Waited for 191.8574ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/namespaces/default/serviceaccounts
	I0229 02:42:46.725833    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/default/serviceaccounts
	I0229 02:42:46.725833    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:46.725833    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:46.725833    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:46.730177    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:46.730177    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:46.730177    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:46.730177    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:46.730177    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:46.730177    1532 round_trippers.go:580]     Content-Length: 262
	I0229 02:42:46.730177    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:46 GMT
	I0229 02:42:46.730177    1532 round_trippers.go:580]     Audit-Id: fc20ff84-61e3-40cc-b461-8b475f6d3577
	I0229 02:42:46.730177    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:46.730177    1532 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"2049"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a442432a-e4e1-4889-bfa8-e3967acc17f0","resourceVersion":"330","creationTimestamp":"2024-02-29T02:16:04Z"}}]}
	I0229 02:42:46.730856    1532 default_sa.go:45] found service account: "default"
	I0229 02:42:46.730856    1532 default_sa.go:55] duration metric: took 197.3005ms for default service account to be created ...
	I0229 02:42:46.730856    1532 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:42:46.929039    1532 request.go:629] Waited for 197.8097ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods
	I0229 02:42:46.929303    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods
	I0229 02:42:46.929303    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:46.929303    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:46.929303    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:46.944000    1532 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0229 02:42:46.944096    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:46.944096    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:46.944096    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:46.944096    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:47 GMT
	I0229 02:42:46.944096    1532 round_trippers.go:580]     Audit-Id: f48d082c-2512-4352-a544-5d529df77a80
	I0229 02:42:46.944182    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:46.944182    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:46.945724    1532 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2049"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2045","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 69073 chars]
	I0229 02:42:46.948796    1532 system_pods.go:86] 10 kube-system pods found
	I0229 02:42:46.948796    1532 system_pods.go:89] "coredns-5dd5756b68-8g6tg" [ef7fb259-9f24-4645-9eff-2b16f6789e1b] Running
	I0229 02:42:46.948796    1532 system_pods.go:89] "etcd-multinode-314500" [64dda041-1f1d-4866-aa39-62d21bd84e46] Running
	I0229 02:42:46.948796    1532 system_pods.go:89] "kindnet-6r7b8" [402c3ac1-05a9-45f1-aa7d-c0fb8ced6c87] Running
	I0229 02:42:46.948796    1532 system_pods.go:89] "kindnet-t9r77" [4620d417-744c-4049-82ab-79d1ee7f047c] Running
	I0229 02:42:46.948796    1532 system_pods.go:89] "kube-apiserver-multinode-314500" [baa3dc33-6d86-4748-9d57-c64f45dcfbf7] Running
	I0229 02:42:46.948796    1532 system_pods.go:89] "kube-controller-manager-multinode-314500" [58e57902-e113-44a9-b5b5-4aba2ba13491] Running
	I0229 02:42:46.948796    1532 system_pods.go:89] "kube-proxy-4gbrl" [accb56cb-79ee-4f16-b05e-91bf554c4a60] Running
	I0229 02:42:46.948796    1532 system_pods.go:89] "kube-proxy-6r6j4" [2b84b22d-3786-4f9e-a23a-c7cfc93bb671] Running
	I0229 02:42:46.948796    1532 system_pods.go:89] "kube-scheduler-multinode-314500" [31fcecc6-17de-43a6-892d-37cd915de64b] Running
	I0229 02:42:46.948796    1532 system_pods.go:89] "storage-provisioner" [9780520b-8ff9-408a-ab6f-41b63790ccd1] Running
	I0229 02:42:46.948796    1532 system_pods.go:126] duration metric: took 217.928ms to wait for k8s-apps to be running ...
	I0229 02:42:46.948796    1532 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:42:46.956925    1532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:42:46.989626    1532 system_svc.go:56] duration metric: took 40.8284ms WaitForService to wait for kubelet.
	I0229 02:42:46.989699    1532 kubeadm.go:581] duration metric: took 23.6076188s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:42:46.989772    1532 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:42:47.132105    1532 request.go:629] Waited for 142.2378ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/nodes
	I0229 02:42:47.132105    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes
	I0229 02:42:47.132105    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:47.132105    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:47.132105    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:47.137578    1532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:42:47.137578    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:47.137657    1532 round_trippers.go:580]     Audit-Id: 4e0a653e-1255-436b-bbd0-721176de08e9
	I0229 02:42:47.137657    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:47.137657    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:47.137657    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:47.137739    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:47.137739    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:47 GMT
	I0229 02:42:47.138204    1532 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2049"},"items":[{"metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10085 chars]
	I0229 02:42:47.139146    1532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:42:47.139222    1532 node_conditions.go:123] node cpu capacity is 2
	I0229 02:42:47.139222    1532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:42:47.139222    1532 node_conditions.go:123] node cpu capacity is 2
	I0229 02:42:47.139222    1532 node_conditions.go:105] duration metric: took 149.4417ms to run NodePressure ...
	I0229 02:42:47.139222    1532 start.go:228] waiting for startup goroutines ...
	I0229 02:42:47.139298    1532 start.go:233] waiting for cluster config update ...
	I0229 02:42:47.139298    1532 start.go:242] writing updated cluster config ...
	I0229 02:42:47.154492    1532 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:42:47.154492    1532 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:42:47.158114    1532 out.go:177] * Starting worker node multinode-314500-m02 in cluster multinode-314500
	I0229 02:42:47.158515    1532 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:42:47.158631    1532 cache.go:56] Caching tarball of preloaded images
	I0229 02:42:47.158872    1532 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:42:47.159062    1532 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 02:42:47.159362    1532 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:42:47.161474    1532 start.go:365] acquiring machines lock for multinode-314500-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:42:47.161703    1532 start.go:369] acquired machines lock for "multinode-314500-m02" in 112.6µs
	I0229 02:42:47.161806    1532 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:42:47.161806    1532 fix.go:54] fixHost starting: m02
	I0229 02:42:47.162349    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:42:49.145688    1532 main.go:141] libmachine: [stdout =====>] : Off
	
	I0229 02:42:49.145688    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:42:49.145688    1532 fix.go:102] recreateIfNeeded on multinode-314500-m02: state=Stopped err=<nil>
	W0229 02:42:49.145688    1532 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:42:49.146500    1532 out.go:177] * Restarting existing hyperv VM for "multinode-314500-m02" ...
	I0229 02:42:49.147122    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-314500-m02
	I0229 02:42:51.854678    1532 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:42:51.854678    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:42:51.854772    1532 main.go:141] libmachine: Waiting for host to start...
	I0229 02:42:51.854772    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:42:53.926012    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:42:53.926012    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:42:53.926092    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:42:56.273041    1532 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:42:56.273041    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:42:57.283768    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:42:59.292759    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:42:59.292835    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:42:59.292869    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]

                                                
                                                
** /stderr **
multinode_test.go:384: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-314500 --wait=true -v=8 --alsologtostderr --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-314500 -n multinode-314500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-314500 -n multinode-314500: (11.4732312s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-314500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-314500 logs -n 25: (8.1486365s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p multinode-314500 -- apply -f                   | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- rollout                    | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | status deployment/busybox                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- get pods -o                | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- get pods -o                | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2 --                       |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm --                       |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2 --                       |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm --                       |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2 -- nslookup              |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm -- nslookup              |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- get pods -o                | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-826w2                          |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC |                     |
	|         | busybox-5b5d89c9d6-826w2 -- sh                    |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.0.1                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC | 29 Feb 24 02:19 UTC |
	|         | busybox-5b5d89c9d6-qcblm                          |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-314500 -- exec                       | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:19 UTC |                     |
	|         | busybox-5b5d89c9d6-qcblm -- sh                    |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.0.1                           |                  |                   |         |                     |                     |
	| node    | add -p multinode-314500 -v 3                      | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:20 UTC |                     |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	| node    | multinode-314500 node stop m03                    | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:26 UTC |
	| node    | multinode-314500 node start                       | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:29 UTC |
	|         | m03 --alsologtostderr                             |                  |                   |         |                     |                     |
	| node    | list -p multinode-314500                          | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:29 UTC |                     |
	| stop    | -p multinode-314500                               | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:29 UTC | 29 Feb 24 02:31 UTC |
	| start   | -p multinode-314500                               | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:31 UTC | 29 Feb 24 02:37 UTC |
	|         | --wait=true -v=8                                  |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	| node    | list -p multinode-314500                          | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:37 UTC |                     |
	| node    | multinode-314500 node delete                      | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:38 UTC | 29 Feb 24 02:38 UTC |
	|         | m03                                               |                  |                   |         |                     |                     |
	| stop    | multinode-314500 stop                             | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:39 UTC | 29 Feb 24 02:40 UTC |
	| start   | -p multinode-314500                               | multinode-314500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 02:40 UTC |                     |
	|         | --wait=true -v=8                                  |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                  |                   |         |                     |                     |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:40:22
	Running on machine: minikube5
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:40:22.834525    1532 out.go:291] Setting OutFile to fd 1404 ...
	I0229 02:40:22.835418    1532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:40:22.835465    1532 out.go:304] Setting ErrFile to fd 1460...
	I0229 02:40:22.835465    1532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:40:22.854125    1532 out.go:298] Setting JSON to false
	I0229 02:40:22.857151    1532 start.go:129] hostinfo: {"hostname":"minikube5","uptime":270649,"bootTime":1708903773,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 02:40:22.857151    1532 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 02:40:22.858119    1532 out.go:177] * [multinode-314500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 02:40:22.859160    1532 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:40:22.859160    1532 notify.go:220] Checking for updates...
	I0229 02:40:22.860125    1532 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:40:22.860125    1532 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 02:40:22.860125    1532 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:40:22.861129    1532 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:40:22.862130    1532 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:40:22.863132    1532 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:40:27.896918    1532 out.go:177] * Using the hyperv driver based on existing profile
	I0229 02:40:27.897733    1532 start.go:299] selected driver: hyperv
	I0229 02:40:27.897733    1532 start.go:903] validating driver "hyperv" against &{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.4.42 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:40:27.897733    1532 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:40:27.943386    1532 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:40:27.943386    1532 cni.go:84] Creating CNI manager for ""
	I0229 02:40:27.943910    1532 cni.go:136] 2 nodes found, recommending kindnet
	I0229 02:40:27.943980    1532 start_flags.go:323] config:
	{Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.4.42 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:40:27.944621    1532 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:40:27.945916    1532 out.go:177] * Starting control plane node multinode-314500 in cluster multinode-314500
	I0229 02:40:27.946502    1532 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:40:27.946573    1532 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 02:40:27.946706    1532 cache.go:56] Caching tarball of preloaded images
	I0229 02:40:27.946966    1532 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:40:27.946966    1532 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 02:40:27.946966    1532 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:40:27.948916    1532 start.go:365] acquiring machines lock for multinode-314500: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:40:27.948916    1532 start.go:369] acquired machines lock for "multinode-314500" in 0s
	I0229 02:40:27.948916    1532 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:40:27.948916    1532 fix.go:54] fixHost starting: 
	I0229 02:40:27.949876    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:40:30.565033    1532 main.go:141] libmachine: [stdout =====>] : Off
	
	I0229 02:40:30.565033    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:30.565366    1532 fix.go:102] recreateIfNeeded on multinode-314500: state=Stopped err=<nil>
	W0229 02:40:30.565439    1532 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:40:30.566359    1532 out.go:177] * Restarting existing hyperv VM for "multinode-314500" ...
	I0229 02:40:30.566983    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-314500
	I0229 02:40:33.280525    1532 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:40:33.280525    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:33.280603    1532 main.go:141] libmachine: Waiting for host to start...
	I0229 02:40:33.280603    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:40:35.410766    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:40:35.410766    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:35.411454    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:40:37.786830    1532 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:40:37.786830    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:38.791448    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:40:40.872557    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:40:40.872557    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:40.872623    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:40:43.245619    1532 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:40:43.245619    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:44.253451    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:40:46.352087    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:40:46.352087    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:46.352087    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:40:48.740536    1532 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:40:48.740536    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:49.744234    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:40:51.834642    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:40:51.834845    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:51.834937    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:40:54.178970    1532 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:40:54.178970    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:55.185017    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:40:57.216991    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:40:57.216991    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:57.217189    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:40:59.641935    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:40:59.642528    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:59.645053    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:01.654539    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:01.654539    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:01.654627    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:04.090413    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:04.090413    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:04.090413    1532 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:41:04.093506    1532 machine.go:88] provisioning docker machine ...
	I0229 02:41:04.093506    1532 buildroot.go:166] provisioning hostname "multinode-314500"
	I0229 02:41:04.093506    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:06.093944    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:06.094952    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:06.094997    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:08.516462    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:08.516462    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:08.521484    1532 main.go:141] libmachine: Using SSH client type: native
	I0229 02:41:08.521746    1532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.238 22 <nil> <nil>}
	I0229 02:41:08.521746    1532 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-314500 && echo "multinode-314500" | sudo tee /etc/hostname
	I0229 02:41:08.696369    1532 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-314500
	
	I0229 02:41:08.696369    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:10.744486    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:10.744486    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:10.744486    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:13.163550    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:13.163550    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:13.168162    1532 main.go:141] libmachine: Using SSH client type: native
	I0229 02:41:13.168162    1532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.238 22 <nil> <nil>}
	I0229 02:41:13.168162    1532 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-314500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-314500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-314500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:41:13.329613    1532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:41:13.329830    1532 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 02:41:13.329830    1532 buildroot.go:174] setting up certificates
	I0229 02:41:13.329830    1532 provision.go:83] configureAuth start
	I0229 02:41:13.329923    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:15.326770    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:15.327541    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:15.327578    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:17.726055    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:17.726055    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:17.726055    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:19.759154    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:19.759154    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:19.759678    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:22.186855    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:22.186855    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:22.186855    1532 provision.go:138] copyHostCerts
	I0229 02:41:22.187571    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 02:41:22.187669    1532 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 02:41:22.187669    1532 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 02:41:22.187669    1532 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 02:41:22.189039    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 02:41:22.189274    1532 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 02:41:22.189274    1532 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 02:41:22.189566    1532 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 02:41:22.190325    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 02:41:22.190587    1532 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 02:41:22.190648    1532 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 02:41:22.190648    1532 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0229 02:41:22.191700    1532 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-314500 san=[172.19.2.238 172.19.2.238 localhost 127.0.0.1 minikube multinode-314500]
	I0229 02:41:22.671350    1532 provision.go:172] copyRemoteCerts
	I0229 02:41:22.680652    1532 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:41:22.680794    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:24.699199    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:24.699245    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:24.699306    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:27.138104    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:27.138104    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:27.138104    1532 sshutil.go:53] new ssh client: &{IP:172.19.2.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:41:27.254026    1532 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5729449s)
	I0229 02:41:27.254115    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 02:41:27.254247    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 02:41:27.298985    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 02:41:27.299294    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0229 02:41:27.347314    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 02:41:27.347775    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:41:27.394771    1532 provision.go:86] duration metric: configureAuth took 14.0640677s
	I0229 02:41:27.394771    1532 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:41:27.395476    1532 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:41:27.395476    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:29.453121    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:29.453861    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:29.453939    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:31.860828    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:31.861114    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:31.867223    1532 main.go:141] libmachine: Using SSH client type: native
	I0229 02:41:31.867745    1532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.238 22 <nil> <nil>}
	I0229 02:41:31.867837    1532 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 02:41:32.016154    1532 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 02:41:32.016241    1532 buildroot.go:70] root file system type: tmpfs
	I0229 02:41:32.016443    1532 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 02:41:32.016526    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:34.019135    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:34.019135    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:34.019210    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:36.440661    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:36.440953    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:36.445080    1532 main.go:141] libmachine: Using SSH client type: native
	I0229 02:41:36.445494    1532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.238 22 <nil> <nil>}
	I0229 02:41:36.445494    1532 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 02:41:36.638749    1532 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 02:41:36.638749    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:38.699052    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:38.699052    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:38.699052    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:41.118719    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:41.118719    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:41.123562    1532 main.go:141] libmachine: Using SSH client type: native
	I0229 02:41:41.124008    1532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.238 22 <nil> <nil>}
	I0229 02:41:41.124074    1532 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 02:41:42.558705    1532 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 02:41:42.559253    1532 machine.go:91] provisioned docker machine in 38.4636118s
	I0229 02:41:42.559253    1532 start.go:300] post-start starting for "multinode-314500" (driver="hyperv")
	I0229 02:41:42.559313    1532 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:41:42.568473    1532 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:41:42.568473    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:44.582129    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:44.582129    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:44.582201    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:46.982912    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:46.983111    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:46.983218    1532 sshutil.go:53] new ssh client: &{IP:172.19.2.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:41:47.088469    1532 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5197449s)
	I0229 02:41:47.098906    1532 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:41:47.105968    1532 command_runner.go:130] > NAME=Buildroot
	I0229 02:41:47.105968    1532 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 02:41:47.105968    1532 command_runner.go:130] > ID=buildroot
	I0229 02:41:47.105968    1532 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 02:41:47.105968    1532 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 02:41:47.105968    1532 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:41:47.105968    1532 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 02:41:47.106822    1532 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 02:41:47.107546    1532 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> 33122.pem in /etc/ssl/certs
	I0229 02:41:47.107546    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /etc/ssl/certs/33122.pem
	I0229 02:41:47.116966    1532 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:41:47.136951    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /etc/ssl/certs/33122.pem (1708 bytes)
	I0229 02:41:47.183763    1532 start.go:303] post-start completed in 4.6241932s
	I0229 02:41:47.183763    1532 fix.go:56] fixHost completed within 1m19.2304508s
	I0229 02:41:47.183763    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:49.198966    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:49.198966    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:49.198966    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:51.582311    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:51.583115    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:51.587985    1532 main.go:141] libmachine: Using SSH client type: native
	I0229 02:41:51.588686    1532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.238 22 <nil> <nil>}
	I0229 02:41:51.588686    1532 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 02:41:51.731414    1532 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709174511.891458216
	
	I0229 02:41:51.731414    1532 fix.go:206] guest clock: 1709174511.891458216
	I0229 02:41:51.731414    1532 fix.go:219] Guest: 2024-02-29 02:41:51.891458216 +0000 UTC Remote: 2024-02-29 02:41:47.183763 +0000 UTC m=+84.494854101 (delta=4.707695216s)
	I0229 02:41:51.731414    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:53.759386    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:53.760162    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:53.760162    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:41:56.204141    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:41:56.204141    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:56.206039    1532 main.go:141] libmachine: Using SSH client type: native
	I0229 02:41:56.206039    1532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.2.238 22 <nil> <nil>}
	I0229 02:41:56.206039    1532 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709174511
	I0229 02:41:56.368921    1532 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 02:41:51 UTC 2024
	
	I0229 02:41:56.368921    1532 fix.go:226] clock set: Thu Feb 29 02:41:51 UTC 2024
	 (err=<nil>)
	I0229 02:41:56.368921    1532 start.go:83] releasing machines lock for "multinode-314500", held for 1m28.4150988s
	I0229 02:41:56.369147    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:41:58.382753    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:41:58.382753    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:41:58.382753    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:42:00.755552    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:42:00.755945    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:42:00.760699    1532 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:42:00.760802    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:42:00.766702    1532 ssh_runner.go:195] Run: cat /version.json
	I0229 02:42:00.766702    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:42:02.779747    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:42:02.779747    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:42:02.779848    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:42:02.782057    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:42:02.782057    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:42:02.782274    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:42:05.266914    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:42:05.266914    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:42:05.267586    1532 sshutil.go:53] new ssh client: &{IP:172.19.2.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:42:05.290813    1532 main.go:141] libmachine: [stdout =====>] : 172.19.2.238
	
	I0229 02:42:05.291067    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:42:05.291227    1532 sshutil.go:53] new ssh client: &{IP:172.19.2.238 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:42:05.376179    1532 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0229 02:42:05.376329    1532 ssh_runner.go:235] Completed: cat /version.json: (4.6093709s)
	I0229 02:42:05.388449    1532 ssh_runner.go:195] Run: systemctl --version
	I0229 02:42:05.507931    1532 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 02:42:05.508138    1532 command_runner.go:130] > systemd 252 (252)
	I0229 02:42:05.508138    1532 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7471387s)
	I0229 02:42:05.508138    1532 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0229 02:42:05.517184    1532 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 02:42:05.525754    1532 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0229 02:42:05.525754    1532 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:42:05.536848    1532 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:42:05.564981    1532 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0229 02:42:05.565079    1532 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:42:05.565079    1532 start.go:475] detecting cgroup driver to use...
	I0229 02:42:05.565482    1532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:42:05.599297    1532 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0229 02:42:05.608280    1532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 02:42:05.637070    1532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:42:05.656188    1532 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:42:05.664958    1532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:42:05.693329    1532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:42:05.721902    1532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:42:05.750212    1532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:42:05.777556    1532 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:42:05.807365    1532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 02:42:05.835742    1532 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:42:05.854932    1532 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 02:42:05.863887    1532 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:42:05.890144    1532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:42:06.097343    1532 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 02:42:06.129556    1532 start.go:475] detecting cgroup driver to use...
	I0229 02:42:06.140526    1532 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 02:42:06.166113    1532 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0229 02:42:06.166113    1532 command_runner.go:130] > [Unit]
	I0229 02:42:06.166113    1532 command_runner.go:130] > Description=Docker Application Container Engine
	I0229 02:42:06.166113    1532 command_runner.go:130] > Documentation=https://docs.docker.com
	I0229 02:42:06.166113    1532 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0229 02:42:06.166113    1532 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0229 02:42:06.166113    1532 command_runner.go:130] > StartLimitBurst=3
	I0229 02:42:06.166113    1532 command_runner.go:130] > StartLimitIntervalSec=60
	I0229 02:42:06.166113    1532 command_runner.go:130] > [Service]
	I0229 02:42:06.166113    1532 command_runner.go:130] > Type=notify
	I0229 02:42:06.166113    1532 command_runner.go:130] > Restart=on-failure
	I0229 02:42:06.166113    1532 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0229 02:42:06.167115    1532 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0229 02:42:06.167115    1532 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0229 02:42:06.167115    1532 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0229 02:42:06.167115    1532 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0229 02:42:06.167115    1532 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0229 02:42:06.167115    1532 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0229 02:42:06.167115    1532 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0229 02:42:06.167115    1532 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0229 02:42:06.167115    1532 command_runner.go:130] > ExecStart=
	I0229 02:42:06.167115    1532 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0229 02:42:06.167115    1532 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0229 02:42:06.167115    1532 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0229 02:42:06.167115    1532 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0229 02:42:06.167115    1532 command_runner.go:130] > LimitNOFILE=infinity
	I0229 02:42:06.167115    1532 command_runner.go:130] > LimitNPROC=infinity
	I0229 02:42:06.167115    1532 command_runner.go:130] > LimitCORE=infinity
	I0229 02:42:06.167115    1532 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0229 02:42:06.167115    1532 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0229 02:42:06.167115    1532 command_runner.go:130] > TasksMax=infinity
	I0229 02:42:06.167115    1532 command_runner.go:130] > TimeoutStartSec=0
	I0229 02:42:06.167115    1532 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0229 02:42:06.167115    1532 command_runner.go:130] > Delegate=yes
	I0229 02:42:06.167115    1532 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0229 02:42:06.167115    1532 command_runner.go:130] > KillMode=process
	I0229 02:42:06.167115    1532 command_runner.go:130] > [Install]
	I0229 02:42:06.167115    1532 command_runner.go:130] > WantedBy=multi-user.target
	I0229 02:42:06.176637    1532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:42:06.206705    1532 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:42:06.242628    1532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:42:06.280954    1532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:42:06.312303    1532 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:42:06.362775    1532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:42:06.385494    1532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:42:06.418911    1532 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0229 02:42:06.429451    1532 ssh_runner.go:195] Run: which cri-dockerd
	I0229 02:42:06.434887    1532 command_runner.go:130] > /usr/bin/cri-dockerd
	I0229 02:42:06.444028    1532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 02:42:06.460928    1532 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 02:42:06.503181    1532 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 02:42:06.712738    1532 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 02:42:06.915962    1532 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 02:42:06.916311    1532 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 02:42:06.960512    1532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:42:07.163380    1532 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 02:42:08.798372    1532 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6349004s)
	I0229 02:42:08.808627    1532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 02:42:08.843561    1532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:42:08.876982    1532 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 02:42:09.089675    1532 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 02:42:09.283179    1532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:42:09.504491    1532 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 02:42:09.546886    1532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 02:42:09.582352    1532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:42:09.774487    1532 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 02:42:09.879578    1532 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 02:42:09.888818    1532 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 02:42:09.898969    1532 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0229 02:42:09.898969    1532 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 02:42:09.898969    1532 command_runner.go:130] > Device: 0,22	Inode: 858         Links: 1
	I0229 02:42:09.898969    1532 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0229 02:42:09.898969    1532 command_runner.go:130] > Access: 2024-02-29 02:42:09.968763905 +0000
	I0229 02:42:09.898969    1532 command_runner.go:130] > Modify: 2024-02-29 02:42:09.968763905 +0000
	I0229 02:42:09.898969    1532 command_runner.go:130] > Change: 2024-02-29 02:42:09.973764265 +0000
	I0229 02:42:09.898969    1532 command_runner.go:130] >  Birth: -
	I0229 02:42:09.898969    1532 start.go:543] Will wait 60s for crictl version
	I0229 02:42:09.910720    1532 ssh_runner.go:195] Run: which crictl
	I0229 02:42:09.917529    1532 command_runner.go:130] > /usr/bin/crictl
	I0229 02:42:09.925899    1532 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:42:10.004576    1532 command_runner.go:130] > Version:  0.1.0
	I0229 02:42:10.004576    1532 command_runner.go:130] > RuntimeName:  docker
	I0229 02:42:10.004576    1532 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0229 02:42:10.004576    1532 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 02:42:10.004576    1532 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 02:42:10.012089    1532 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:42:10.042590    1532 command_runner.go:130] > 24.0.7
	I0229 02:42:10.051675    1532 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 02:42:10.084994    1532 command_runner.go:130] > 24.0.7
	I0229 02:42:10.087099    1532 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 02:42:10.087414    1532 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 02:42:10.092456    1532 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 02:42:10.092778    1532 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 02:42:10.092778    1532 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 02:42:10.092778    1532 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:a3:c1 Flags:up|broadcast|multicast|running}
	I0229 02:42:10.095994    1532 ip.go:210] interface addr: fe80::fc78:4865:5cac:d448/64
	I0229 02:42:10.095994    1532 ip.go:210] interface addr: 172.19.0.1/20
	I0229 02:42:10.105006    1532 ssh_runner.go:195] Run: grep 172.19.0.1	host.minikube.internal$ /etc/hosts
	I0229 02:42:10.112690    1532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:42:10.136098    1532 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:42:10.144183    1532 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 02:42:10.177878    1532 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0229 02:42:10.177913    1532 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0229 02:42:10.177913    1532 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0229 02:42:10.177913    1532 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0229 02:42:10.177913    1532 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 02:42:10.177913    1532 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0229 02:42:10.177913    1532 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0229 02:42:10.177913    1532 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0229 02:42:10.177913    1532 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:42:10.177913    1532 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0229 02:42:10.177913    1532 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0229 02:42:10.177913    1532 docker.go:615] Images already preloaded, skipping extraction
	I0229 02:42:10.188018    1532 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 02:42:10.215735    1532 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0229 02:42:10.216589    1532 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0229 02:42:10.216589    1532 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0229 02:42:10.216589    1532 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0229 02:42:10.216589    1532 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 02:42:10.216589    1532 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0229 02:42:10.216660    1532 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0229 02:42:10.216660    1532 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0229 02:42:10.216660    1532 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:42:10.216660    1532 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0229 02:42:10.217527    1532 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0229 02:42:10.217599    1532 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:42:10.223963    1532 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 02:42:10.265074    1532 command_runner.go:130] > cgroupfs
	I0229 02:42:10.266245    1532 cni.go:84] Creating CNI manager for ""
	I0229 02:42:10.266571    1532 cni.go:136] 2 nodes found, recommending kindnet
	I0229 02:42:10.266636    1532 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:42:10.266810    1532 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.2.238 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-314500 NodeName:multinode-314500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.2.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.2.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:42:10.267257    1532 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.2.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-314500"
	  kubeletExtraArgs:
	    node-ip: 172.19.2.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.2.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:42:10.267324    1532 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-314500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.2.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:42:10.279608    1532 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:42:10.298239    1532 command_runner.go:130] > kubeadm
	I0229 02:42:10.298645    1532 command_runner.go:130] > kubectl
	I0229 02:42:10.298645    1532 command_runner.go:130] > kubelet
	I0229 02:42:10.298689    1532 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:42:10.309157    1532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:42:10.327724    1532 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0229 02:42:10.360450    1532 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:42:10.392713    1532 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0229 02:42:10.445623    1532 ssh_runner.go:195] Run: grep 172.19.2.238	control-plane.minikube.internal$ /etc/hosts
	I0229 02:42:10.451977    1532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.2.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:42:10.475979    1532 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500 for IP: 172.19.2.238
	I0229 02:42:10.475979    1532 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:42:10.476620    1532 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 02:42:10.476867    1532 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 02:42:10.477689    1532 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\client.key
	I0229 02:42:10.477853    1532 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.a332be12
	I0229 02:42:10.477937    1532 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.a332be12 with IP's: [172.19.2.238 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 02:42:10.818670    1532 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.a332be12 ...
	I0229 02:42:10.818670    1532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.a332be12: {Name:mk3ff66c4da8459c2353911ccafdd38e8120ad31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:42:10.820838    1532 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.a332be12 ...
	I0229 02:42:10.820838    1532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.a332be12: {Name:mk8c3a0e50e51af8a0d05e6aeeb6785226bd1a78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:42:10.821202    1532 certs.go:337] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt.a332be12 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt
	I0229 02:42:10.834129    1532 certs.go:341] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key.a332be12 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key
	I0229 02:42:10.836178    1532 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key
	I0229 02:42:10.836178    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 02:42:10.836536    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 02:42:10.836863    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 02:42:10.836863    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 02:42:10.836863    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 02:42:10.836863    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 02:42:10.837501    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 02:42:10.837598    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 02:42:10.838108    1532 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem (1338 bytes)
	W0229 02:42:10.838250    1532 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312_empty.pem, impossibly tiny 0 bytes
	I0229 02:42:10.838558    1532 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 02:42:10.838716    1532 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 02:42:10.838716    1532 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 02:42:10.838716    1532 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0229 02:42:10.839455    1532 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem (1708 bytes)
	I0229 02:42:10.839598    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem -> /usr/share/ca-certificates/3312.pem
	I0229 02:42:10.839670    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> /usr/share/ca-certificates/33122.pem
	I0229 02:42:10.839670    1532 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:42:10.840943    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:42:10.890064    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:42:10.939684    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:42:10.984389    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 02:42:11.029094    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:42:11.075299    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:42:11.125697    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:42:11.171969    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:42:11.222096    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem --> /usr/share/ca-certificates/3312.pem (1338 bytes)
	I0229 02:42:11.266029    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /usr/share/ca-certificates/33122.pem (1708 bytes)
	I0229 02:42:11.317063    1532 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:42:11.361913    1532 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:42:11.405569    1532 ssh_runner.go:195] Run: openssl version
	I0229 02:42:11.413951    1532 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 02:42:11.424695    1532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/33122.pem && ln -fs /usr/share/ca-certificates/33122.pem /etc/ssl/certs/33122.pem"
	I0229 02:42:11.453890    1532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/33122.pem
	I0229 02:42:11.462287    1532 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:42:11.462446    1532 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 02:42:11.471326    1532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/33122.pem
	I0229 02:42:11.481128    1532 command_runner.go:130] > 3ec20f2e
	I0229 02:42:11.490731    1532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/33122.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:42:11.519460    1532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:42:11.551975    1532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:42:11.559902    1532 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:42:11.560002    1532 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:42:11.568323    1532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:42:11.575353    1532 command_runner.go:130] > b5213941
	I0229 02:42:11.586279    1532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:42:11.616513    1532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3312.pem && ln -fs /usr/share/ca-certificates/3312.pem /etc/ssl/certs/3312.pem"
	I0229 02:42:11.648476    1532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3312.pem
	I0229 02:42:11.655490    1532 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:42:11.656578    1532 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 02:42:11.665391    1532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3312.pem
	I0229 02:42:11.674597    1532 command_runner.go:130] > 51391683
	I0229 02:42:11.683686    1532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3312.pem /etc/ssl/certs/51391683.0"
	I0229 02:42:11.713263    1532 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:42:11.721858    1532 command_runner.go:130] > ca.crt
	I0229 02:42:11.721858    1532 command_runner.go:130] > ca.key
	I0229 02:42:11.721858    1532 command_runner.go:130] > healthcheck-client.crt
	I0229 02:42:11.721858    1532 command_runner.go:130] > healthcheck-client.key
	I0229 02:42:11.721858    1532 command_runner.go:130] > peer.crt
	I0229 02:42:11.721858    1532 command_runner.go:130] > peer.key
	I0229 02:42:11.721858    1532 command_runner.go:130] > server.crt
	I0229 02:42:11.721858    1532 command_runner.go:130] > server.key
	I0229 02:42:11.731210    1532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:42:11.741808    1532 command_runner.go:130] > Certificate will not expire
	I0229 02:42:11.752797    1532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:42:11.763403    1532 command_runner.go:130] > Certificate will not expire
	I0229 02:42:11.772221    1532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:42:11.783216    1532 command_runner.go:130] > Certificate will not expire
	I0229 02:42:11.793229    1532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:42:11.803076    1532 command_runner.go:130] > Certificate will not expire
	I0229 02:42:11.813608    1532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:42:11.824698    1532 command_runner.go:130] > Certificate will not expire
	I0229 02:42:11.835071    1532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 02:42:11.846370    1532 command_runner.go:130] > Certificate will not expire
	I0229 02:42:11.846370    1532 kubeadm.go:404] StartCluster: {Name:multinode-314500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-314500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.2.238 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.4.42 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:42:11.854019    1532 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 02:42:11.893381    1532 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:42:11.913052    1532 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0229 02:42:11.913052    1532 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0229 02:42:11.913052    1532 command_runner.go:130] > /var/lib/minikube/etcd:
	I0229 02:42:11.913052    1532 command_runner.go:130] > member
	I0229 02:42:11.913052    1532 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:42:11.913052    1532 kubeadm.go:636] restartCluster start
	I0229 02:42:11.925527    1532 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:42:11.943318    1532 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:42:11.944330    1532 kubeconfig.go:135] verify returned: extract IP: "multinode-314500" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:42:11.944330    1532 kubeconfig.go:146] "multinode-314500" context is missing from C:\Users\jenkins.minikube5\minikube-integration\kubeconfig - will repair!
	I0229 02:42:11.945327    1532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:42:11.957313    1532 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:42:11.958320    1532 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.238:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500/client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500/client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:42:11.959322    1532 cert_rotation.go:137] Starting client certificate rotation controller
	I0229 02:42:11.968317    1532 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:42:11.986929    1532 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0229 02:42:11.986929    1532 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:42:11.986929    1532 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0229 02:42:11.986929    1532 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0229 02:42:11.986929    1532 command_runner.go:130] >  kind: InitConfiguration
	I0229 02:42:11.986929    1532 command_runner.go:130] >  localAPIEndpoint:
	I0229 02:42:11.986929    1532 command_runner.go:130] > -  advertiseAddress: 172.19.2.252
	I0229 02:42:11.986929    1532 command_runner.go:130] > +  advertiseAddress: 172.19.2.238
	I0229 02:42:11.986929    1532 command_runner.go:130] >    bindPort: 8443
	I0229 02:42:11.986929    1532 command_runner.go:130] >  bootstrapTokens:
	I0229 02:42:11.986929    1532 command_runner.go:130] >    - groups:
	I0229 02:42:11.986929    1532 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0229 02:42:11.986929    1532 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0229 02:42:11.986929    1532 command_runner.go:130] >    name: "multinode-314500"
	I0229 02:42:11.986929    1532 command_runner.go:130] >    kubeletExtraArgs:
	I0229 02:42:11.986929    1532 command_runner.go:130] > -    node-ip: 172.19.2.252
	I0229 02:42:11.986929    1532 command_runner.go:130] > +    node-ip: 172.19.2.238
	I0229 02:42:11.986929    1532 command_runner.go:130] >    taints: []
	I0229 02:42:11.986929    1532 command_runner.go:130] >  ---
	I0229 02:42:11.986929    1532 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0229 02:42:11.986929    1532 command_runner.go:130] >  kind: ClusterConfiguration
	I0229 02:42:11.986929    1532 command_runner.go:130] >  apiServer:
	I0229 02:42:11.986929    1532 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.19.2.252"]
	I0229 02:42:11.986929    1532 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.19.2.238"]
	I0229 02:42:11.986929    1532 command_runner.go:130] >    extraArgs:
	I0229 02:42:11.986929    1532 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0229 02:42:11.987453    1532 command_runner.go:130] >  controllerManager:
	I0229 02:42:11.987637    1532 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.19.2.252
	+  advertiseAddress: 172.19.2.238
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-314500"
	   kubeletExtraArgs:
	-    node-ip: 172.19.2.252
	+    node-ip: 172.19.2.238
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.19.2.252"]
	+  certSANs: ["127.0.0.1", "localhost", "172.19.2.238"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0229 02:42:11.987702    1532 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:42:11.993619    1532 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 02:42:12.025972    1532 command_runner.go:130] > 72b2d832587c
	I0229 02:42:12.026007    1532 command_runner.go:130] > 5814ae38cea0
	I0229 02:42:12.026007    1532 command_runner.go:130] > e767eb473501
	I0229 02:42:12.026007    1532 command_runner.go:130] > 1993ffe76ae7
	I0229 02:42:12.026007    1532 command_runner.go:130] > b606f60fc884
	I0229 02:42:12.026007    1532 command_runner.go:130] > 341278d602dd
	I0229 02:42:12.026007    1532 command_runner.go:130] > 349bdaee8eb9
	I0229 02:42:12.026007    1532 command_runner.go:130] > b37b7f8a0d78
	I0229 02:42:12.026007    1532 command_runner.go:130] > 02fbddb29c60
	I0229 02:42:12.026007    1532 command_runner.go:130] > ada445c976af
	I0229 02:42:12.026007    1532 command_runner.go:130] > 795e8c684507
	I0229 02:42:12.026007    1532 command_runner.go:130] > f1cb36bcb3f3
	I0229 02:42:12.026007    1532 command_runner.go:130] > 41745010357f
	I0229 02:42:12.026007    1532 command_runner.go:130] > 9d23233978a7
	I0229 02:42:12.026007    1532 command_runner.go:130] > 252fb20145ea
	I0229 02:42:12.026007    1532 command_runner.go:130] > 340bdcfacbe2
	I0229 02:42:12.026007    1532 command_runner.go:130] > 007d6c9a53e1
	I0229 02:42:12.026007    1532 command_runner.go:130] > 11c14ebdfaf6
	I0229 02:42:12.026007    1532 command_runner.go:130] > 8c944d91b625
	I0229 02:42:12.026007    1532 command_runner.go:130] > dd61788b0a0d
	I0229 02:42:12.026007    1532 command_runner.go:130] > c93e33130746
	I0229 02:42:12.026007    1532 command_runner.go:130] > 4b10f8bd940b
	I0229 02:42:12.026007    1532 command_runner.go:130] > edb41bd5e75d
	I0229 02:42:12.026007    1532 command_runner.go:130] > ab0c4864aee5
	I0229 02:42:12.026007    1532 command_runner.go:130] > 26b1ab05f99a
	I0229 02:42:12.026007    1532 command_runner.go:130] > bf7b9750ae9e
	I0229 02:42:12.026007    1532 command_runner.go:130] > 96810146c69c
	I0229 02:42:12.026625    1532 docker.go:483] Stopping containers: [72b2d832587c 5814ae38cea0 e767eb473501 1993ffe76ae7 b606f60fc884 341278d602dd 349bdaee8eb9 b37b7f8a0d78 02fbddb29c60 ada445c976af 795e8c684507 f1cb36bcb3f3 41745010357f 9d23233978a7 252fb20145ea 340bdcfacbe2 007d6c9a53e1 11c14ebdfaf6 8c944d91b625 dd61788b0a0d c93e33130746 4b10f8bd940b edb41bd5e75d ab0c4864aee5 26b1ab05f99a bf7b9750ae9e 96810146c69c]
	I0229 02:42:12.033808    1532 ssh_runner.go:195] Run: docker stop 72b2d832587c 5814ae38cea0 e767eb473501 1993ffe76ae7 b606f60fc884 341278d602dd 349bdaee8eb9 b37b7f8a0d78 02fbddb29c60 ada445c976af 795e8c684507 f1cb36bcb3f3 41745010357f 9d23233978a7 252fb20145ea 340bdcfacbe2 007d6c9a53e1 11c14ebdfaf6 8c944d91b625 dd61788b0a0d c93e33130746 4b10f8bd940b edb41bd5e75d ab0c4864aee5 26b1ab05f99a bf7b9750ae9e 96810146c69c
	I0229 02:42:12.065768    1532 command_runner.go:130] > 72b2d832587c
	I0229 02:42:12.065768    1532 command_runner.go:130] > 5814ae38cea0
	I0229 02:42:12.065835    1532 command_runner.go:130] > e767eb473501
	I0229 02:42:12.065835    1532 command_runner.go:130] > 1993ffe76ae7
	I0229 02:42:12.065866    1532 command_runner.go:130] > b606f60fc884
	I0229 02:42:12.065927    1532 command_runner.go:130] > 341278d602dd
	I0229 02:42:12.065927    1532 command_runner.go:130] > 349bdaee8eb9
	I0229 02:42:12.065998    1532 command_runner.go:130] > b37b7f8a0d78
	I0229 02:42:12.065998    1532 command_runner.go:130] > 02fbddb29c60
	I0229 02:42:12.065998    1532 command_runner.go:130] > ada445c976af
	I0229 02:42:12.065998    1532 command_runner.go:130] > 795e8c684507
	I0229 02:42:12.065998    1532 command_runner.go:130] > f1cb36bcb3f3
	I0229 02:42:12.065998    1532 command_runner.go:130] > 41745010357f
	I0229 02:42:12.065998    1532 command_runner.go:130] > 9d23233978a7
	I0229 02:42:12.065998    1532 command_runner.go:130] > 252fb20145ea
	I0229 02:42:12.065998    1532 command_runner.go:130] > 340bdcfacbe2
	I0229 02:42:12.065998    1532 command_runner.go:130] > 007d6c9a53e1
	I0229 02:42:12.065998    1532 command_runner.go:130] > 11c14ebdfaf6
	I0229 02:42:12.065998    1532 command_runner.go:130] > 8c944d91b625
	I0229 02:42:12.065998    1532 command_runner.go:130] > dd61788b0a0d
	I0229 02:42:12.065998    1532 command_runner.go:130] > c93e33130746
	I0229 02:42:12.065998    1532 command_runner.go:130] > 4b10f8bd940b
	I0229 02:42:12.065998    1532 command_runner.go:130] > edb41bd5e75d
	I0229 02:42:12.065998    1532 command_runner.go:130] > ab0c4864aee5
	I0229 02:42:12.065998    1532 command_runner.go:130] > 26b1ab05f99a
	I0229 02:42:12.065998    1532 command_runner.go:130] > bf7b9750ae9e
	I0229 02:42:12.065998    1532 command_runner.go:130] > 96810146c69c
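As logged above, minikube lists the kube-system container IDs with `docker ps -a --filter`, then stops them all in a single `docker stop` invocation over SSH. A hypothetical sketch of turning the newline-delimited `docker ps` output into that command line (helper names are illustrative, not minikube's):

```go
package main

import (
	"fmt"
	"strings"
)

// parseIDs splits the newline-delimited container IDs that
// `docker ps --format={{.ID}}` prints, dropping blank lines.
func parseIDs(psOutput string) []string {
	var ids []string
	for _, line := range strings.Split(psOutput, "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids
}

// stopCommand joins the IDs into one `docker stop` invocation, matching the
// single ssh_runner call in the log.
func stopCommand(ids []string) string {
	return "docker stop " + strings.Join(ids, " ")
}

func main() {
	ids := parseIDs("72b2d832587c\n5814ae38cea0\ne767eb473501\n")
	fmt.Println(stopCommand(ids))
}
```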
	I0229 02:42:12.074515    1532 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:42:12.112852    1532 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:42:12.130884    1532 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0229 02:42:12.131596    1532 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0229 02:42:12.131596    1532 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0229 02:42:12.131681    1532 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:42:12.131992    1532 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:42:12.140696    1532 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:42:12.158321    1532 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:42:12.158321    1532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:42:12.384565    1532 command_runner.go:130] > [certs] Using the existing "sa" key
	I0229 02:42:12.384795    1532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:42:13.350432    1532 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:42:13.350432    1532 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:42:13.350432    1532 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:42:13.350432    1532 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:42:13.350897    1532 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:42:13.350897    1532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:42:13.661809    1532 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:42:13.661949    1532 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:42:13.661949    1532 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 02:42:13.662077    1532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:42:13.753413    1532 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:42:13.753727    1532 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:42:13.753786    1532 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:42:13.753786    1532 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:42:13.754133    1532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:42:13.846582    1532 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:42:13.846738    1532 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:42:13.856567    1532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:42:14.358672    1532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:42:14.865645    1532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:42:15.360809    1532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:42:15.386716    1532 command_runner.go:130] > 1764
	I0229 02:42:15.386772    1532 api_server.go:72] duration metric: took 1.5399844s to wait for apiserver process to appear ...
	I0229 02:42:15.386772    1532 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:42:15.386772    1532 api_server.go:253] Checking apiserver healthz at https://172.19.2.238:8443/healthz ...
	I0229 02:42:18.776359    1532 api_server.go:279] https://172.19.2.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:42:18.776818    1532 api_server.go:103] status: https://172.19.2.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:42:18.776871    1532 api_server.go:253] Checking apiserver healthz at https://172.19.2.238:8443/healthz ...
	I0229 02:42:18.889474    1532 api_server.go:279] https://172.19.2.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:42:18.889720    1532 api_server.go:103] status: https://172.19.2.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:42:18.889720    1532 api_server.go:253] Checking apiserver healthz at https://172.19.2.238:8443/healthz ...
	I0229 02:42:18.974559    1532 api_server.go:279] https://172.19.2.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:42:18.974661    1532 api_server.go:103] status: https://172.19.2.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:42:19.394394    1532 api_server.go:253] Checking apiserver healthz at https://172.19.2.238:8443/healthz ...
	I0229 02:42:19.409232    1532 api_server.go:279] https://172.19.2.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:42:19.409404    1532 api_server.go:103] status: https://172.19.2.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:42:19.901628    1532 api_server.go:253] Checking apiserver healthz at https://172.19.2.238:8443/healthz ...
	I0229 02:42:19.920180    1532 api_server.go:279] https://172.19.2.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:42:19.920280    1532 api_server.go:103] status: https://172.19.2.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:42:20.391667    1532 api_server.go:253] Checking apiserver healthz at https://172.19.2.238:8443/healthz ...
	I0229 02:42:20.404616    1532 api_server.go:279] https://172.19.2.238:8443/healthz returned 200:
	ok
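The 403 and 500 responses above show the apiserver's verbose /healthz format: one `[+]name ok` or `[-]name failed: reason withheld` line per check, with the `[-]` entries (rbac/bootstrap-roles, bootstrap-controller, etc.) shrinking on each poll until the endpoint returns 200 `ok`. A sketch of extracting the failing checks from such a body (hypothetical parser, shown only to document the format):

```go
package main

import (
	"fmt"
	"strings"
)

// failedChecks returns the names of checks marked "[-]" in a verbose /healthz
// 500 body, e.g. "poststarthook/rbac/bootstrap-roles".
func failedChecks(body string) []string {
	var failed []string
	for _, line := range strings.Split(body, "\n") {
		line = strings.TrimSpace(line)
		if !strings.HasPrefix(line, "[-]") {
			continue
		}
		name := strings.TrimPrefix(line, "[-]")
		if i := strings.Index(name, " failed"); i >= 0 {
			name = name[:i]
		}
		failed = append(failed, name)
	}
	return failed
}

func main() {
	body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]etcd ok\nhealthz check failed"
	fmt.Println(failedChecks(body))
}
```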
	I0229 02:42:20.404810    1532 round_trippers.go:463] GET https://172.19.2.238:8443/version
	I0229 02:42:20.404810    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:20.404810    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:20.404810    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:20.418470    1532 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0229 02:42:20.418470    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:20.419082    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:20.419082    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:20.419082    1532 round_trippers.go:580]     Content-Length: 264
	I0229 02:42:20.419082    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:20 GMT
	I0229 02:42:20.419082    1532 round_trippers.go:580]     Audit-Id: 6ed74524-14fd-4ef9-b17c-8ab10ae57111
	I0229 02:42:20.419082    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:20.419082    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:20.419082    1532 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0229 02:42:20.419082    1532 api_server.go:141] control plane version: v1.28.4
	I0229 02:42:20.419082    1532 api_server.go:131] duration metric: took 5.0320295s to wait for apiserver health ...
	I0229 02:42:20.419082    1532 cni.go:84] Creating CNI manager for ""
	I0229 02:42:20.419082    1532 cni.go:136] 2 nodes found, recommending kindnet
	I0229 02:42:20.420136    1532 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0229 02:42:20.431795    1532 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 02:42:20.440834    1532 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 02:42:20.440834    1532 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 02:42:20.440834    1532 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 02:42:20.440834    1532 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 02:42:20.440834    1532 command_runner.go:130] > Access: 2024-02-29 02:40:58.275316000 +0000
	I0229 02:42:20.440834    1532 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 02:42:20.440834    1532 command_runner.go:130] > Change: 2024-02-29 02:40:46.412000000 +0000
	I0229 02:42:20.440834    1532 command_runner.go:130] >  Birth: -
	I0229 02:42:20.440834    1532 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 02:42:20.440834    1532 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 02:42:20.485530    1532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 02:42:21.725086    1532 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0229 02:42:21.725086    1532 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0229 02:42:21.725086    1532 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0229 02:42:21.725086    1532 command_runner.go:130] > daemonset.apps/kindnet configured
	I0229 02:42:21.725086    1532 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.2394879s)
	I0229 02:42:21.725086    1532 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:42:21.725086    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods
	I0229 02:42:21.725086    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:21.725086    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:21.725086    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:21.730077    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:21.731126    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:21.731126    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:21 GMT
	I0229 02:42:21.731126    1532 round_trippers.go:580]     Audit-Id: 60d19813-251a-4057-8cc9-ce80e3ba7d53
	I0229 02:42:21.731217    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:21.731217    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:21.731258    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:21.731258    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:21.732700    1532 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1924"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 70099 chars]
	I0229 02:42:21.741069    1532 system_pods.go:59] 10 kube-system pods found
	I0229 02:42:21.741069    1532 system_pods.go:61] "coredns-5dd5756b68-8g6tg" [ef7fb259-9f24-4645-9eff-2b16f6789e1b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:42:21.741069    1532 system_pods.go:61] "etcd-multinode-314500" [64dda041-1f1d-4866-aa39-62d21bd84e46] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:42:21.741069    1532 system_pods.go:61] "kindnet-6r7b8" [402c3ac1-05a9-45f1-aa7d-c0fb8ced6c87] Running
	I0229 02:42:21.741069    1532 system_pods.go:61] "kindnet-t9r77" [4620d417-744c-4049-82ab-79d1ee7f047c] Running
	I0229 02:42:21.741069    1532 system_pods.go:61] "kube-apiserver-multinode-314500" [baa3dc33-6d86-4748-9d57-c64f45dcfbf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:42:21.741069    1532 system_pods.go:61] "kube-controller-manager-multinode-314500" [58e57902-e113-44a9-b5b5-4aba2ba13491] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:42:21.741069    1532 system_pods.go:61] "kube-proxy-4gbrl" [accb56cb-79ee-4f16-b05e-91bf554c4a60] Running
	I0229 02:42:21.741069    1532 system_pods.go:61] "kube-proxy-6r6j4" [2b84b22d-3786-4f9e-a23a-c7cfc93bb671] Running
	I0229 02:42:21.741069    1532 system_pods.go:61] "kube-scheduler-multinode-314500" [31fcecc6-17de-43a6-892d-37cd915de64b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:42:21.741069    1532 system_pods.go:61] "storage-provisioner" [9780520b-8ff9-408a-ab6f-41b63790ccd1] Running
	I0229 02:42:21.741069    1532 system_pods.go:74] duration metric: took 15.9813ms to wait for pod list to return data ...
	I0229 02:42:21.741069    1532 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:42:21.741069    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes
	I0229 02:42:21.741069    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:21.741069    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:21.741069    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:21.745063    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:21.745117    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:21.745117    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:21 GMT
	I0229 02:42:21.745117    1532 round_trippers.go:580]     Audit-Id: 3ee9f5f0-964d-4cb1-b6f4-93a2cfdfa963
	I0229 02:42:21.745117    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:21.745117    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:21.745117    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:21.745117    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:21.745117    1532 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1924"},"items":[{"metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10212 chars]
	I0229 02:42:21.746310    1532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:42:21.746310    1532 node_conditions.go:123] node cpu capacity is 2
	I0229 02:42:21.746310    1532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:42:21.746310    1532 node_conditions.go:123] node cpu capacity is 2
	I0229 02:42:21.746310    1532 node_conditions.go:105] duration metric: took 5.2415ms to run NodePressure ...
	I0229 02:42:21.746310    1532 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:42:22.010570    1532 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0229 02:42:22.010667    1532 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0229 02:42:22.010845    1532 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:42:22.011117    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0229 02:42:22.011150    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.011150    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.011150    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.014930    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:22.014930    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.014930    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.014930    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.014930    1532 round_trippers.go:580]     Audit-Id: 6aa9513e-4fa3-49ec-ad4f-5135a86e3028
	I0229 02:42:22.014930    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.014930    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.014930    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.016321    1532 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1926"},"items":[{"metadata":{"name":"etcd-multinode-314500","namespace":"kube-system","uid":"64dda041-1f1d-4866-aa39-62d21bd84e46","resourceVersion":"1914","creationTimestamp":"2024-02-29T02:42:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.2.238:2379","kubernetes.io/config.hash":"96721f37a1f14642fee9a072efcaa322","kubernetes.io/config.mirror":"96721f37a1f14642fee9a072efcaa322","kubernetes.io/config.seen":"2024-02-29T02:42:14.259019103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 29323 chars]
	I0229 02:42:22.017782    1532 kubeadm.go:787] kubelet initialised
	I0229 02:42:22.017782    1532 kubeadm.go:788] duration metric: took 6.906ms waiting for restarted kubelet to initialise ...
	I0229 02:42:22.017857    1532 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:42:22.017933    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods
	I0229 02:42:22.017933    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.017933    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.017933    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.021133    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:22.022164    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.022164    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.022218    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.022218    1532 round_trippers.go:580]     Audit-Id: c8e7b74e-3155-4ddf-a884-675cbb06e3a4
	I0229 02:42:22.022218    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.022218    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.022242    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.023167    1532 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1926"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 70099 chars]
	I0229 02:42:22.026314    1532 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:22.026314    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:22.026314    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.026314    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.026314    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.038415    1532 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0229 02:42:22.038818    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.038852    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.038852    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.038852    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.038852    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.038852    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.038852    1532 round_trippers.go:580]     Audit-Id: 28657b83-0d1a-4fc5-bbd0-3baa515d71b4
	I0229 02:42:22.039093    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:22.039849    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:22.039916    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.039916    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.039980    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.043936    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:22.044012    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.044012    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.044012    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.044012    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.044064    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.044064    1532 round_trippers.go:580]     Audit-Id: 6c161273-6abc-42cd-bffa-b625994414cc
	I0229 02:42:22.044106    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.044106    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:22.044836    1532 pod_ready.go:97] node "multinode-314500" hosting pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.044878    1532 pod_ready.go:81] duration metric: took 18.5204ms waiting for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	E0229 02:42:22.044878    1532 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-314500" hosting pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.044878    1532 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:22.045009    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-314500
	I0229 02:42:22.045009    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.045009    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.045009    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.047188    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:22.047188    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.047188    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.047188    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.047188    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.047188    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.047188    1532 round_trippers.go:580]     Audit-Id: a254cea2-a8f3-4e08-bd95-f983a1439a59
	I0229 02:42:22.047188    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.048277    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-314500","namespace":"kube-system","uid":"64dda041-1f1d-4866-aa39-62d21bd84e46","resourceVersion":"1914","creationTimestamp":"2024-02-29T02:42:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.2.238:2379","kubernetes.io/config.hash":"96721f37a1f14642fee9a072efcaa322","kubernetes.io/config.mirror":"96721f37a1f14642fee9a072efcaa322","kubernetes.io/config.seen":"2024-02-29T02:42:14.259019103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6077 chars]
	I0229 02:42:22.048805    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:22.048805    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.048805    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.048805    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.051068    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:22.051068    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.051068    1532 round_trippers.go:580]     Audit-Id: e067c836-c1c9-4ccd-a13a-50013a0c48c5
	I0229 02:42:22.051068    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.051068    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.051068    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.051068    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.051068    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.051800    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:22.052206    1532 pod_ready.go:97] node "multinode-314500" hosting pod "etcd-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.052206    1532 pod_ready.go:81] duration metric: took 7.2935ms waiting for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	E0229 02:42:22.052206    1532 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-314500" hosting pod "etcd-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.052206    1532 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:22.052206    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-314500
	I0229 02:42:22.052206    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.052206    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.052206    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.054840    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:22.054840    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.054840    1532 round_trippers.go:580]     Audit-Id: 642c0bd5-feab-43d9-8f8c-367f0da4ceef
	I0229 02:42:22.054840    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.054840    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.054840    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.054840    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.054840    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.055409    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-314500","namespace":"kube-system","uid":"baa3dc33-6d86-4748-9d57-c64f45dcfbf7","resourceVersion":"1915","creationTimestamp":"2024-02-29T02:42:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.2.238:8443","kubernetes.io/config.hash":"a317731b2b94a8e14311676e58d24e16","kubernetes.io/config.mirror":"a317731b2b94a8e14311676e58d24e16","kubernetes.io/config.seen":"2024-02-29T02:42:14.259032504Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7635 chars]
	I0229 02:42:22.056052    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:22.056052    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.056052    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.056133    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.058339    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:22.059048    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.059048    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.059048    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.059048    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.059048    1532 round_trippers.go:580]     Audit-Id: d60dde48-e0d5-4615-878c-58128b9db24e
	I0229 02:42:22.059048    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.059048    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.059278    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:22.059306    1532 pod_ready.go:97] node "multinode-314500" hosting pod "kube-apiserver-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.059306    1532 pod_ready.go:81] duration metric: took 7.0997ms waiting for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	E0229 02:42:22.059306    1532 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-314500" hosting pod "kube-apiserver-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.059306    1532 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:22.059306    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-314500
	I0229 02:42:22.059306    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.059833    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.059833    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.065102    1532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:42:22.065102    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.065102    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.065102    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.065102    1532 round_trippers.go:580]     Audit-Id: a842636c-62ee-4fbf-bf84-919d77115bf5
	I0229 02:42:22.065102    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.065102    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.065102    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.065737    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-314500","namespace":"kube-system","uid":"58e57902-e113-44a9-b5b5-4aba2ba13491","resourceVersion":"1913","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.mirror":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.seen":"2024-02-29T02:15:52.221398986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7433 chars]
	I0229 02:42:22.135186    1532 request.go:629] Waited for 68.6354ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:22.135402    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:22.135402    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.135402    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.135402    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.143700    1532 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 02:42:22.144036    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.144036    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.144036    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.144036    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.144101    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.144101    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.144101    1532 round_trippers.go:580]     Audit-Id: 8a1c7a53-a43b-48c8-8780-97b1b1df9400
	I0229 02:42:22.144433    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:22.145578    1532 pod_ready.go:97] node "multinode-314500" hosting pod "kube-controller-manager-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.146087    1532 pod_ready.go:81] duration metric: took 86.7764ms waiting for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	E0229 02:42:22.146151    1532 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-314500" hosting pod "kube-controller-manager-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.146151    1532 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:22.339457    1532 request.go:629] Waited for 193.0973ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gbrl
	I0229 02:42:22.339653    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gbrl
	I0229 02:42:22.339653    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.339653    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.339653    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.345531    1532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:42:22.345531    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.345531    1532 round_trippers.go:580]     Audit-Id: c7e4abd9-aa06-4b4b-998c-0c0417e17697
	I0229 02:42:22.345531    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.345879    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.345879    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.345879    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.345926    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.346271    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4gbrl","generateName":"kube-proxy-","namespace":"kube-system","uid":"accb56cb-79ee-4f16-b05e-91bf554c4a60","resourceVersion":"1598","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0229 02:42:22.525221    1532 request.go:629] Waited for 178.2303ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:42:22.525732    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:42:22.525732    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.525732    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.525833    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.529173    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:22.529337    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.529337    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.529337    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.529337    1532 round_trippers.go:580]     Audit-Id: 2c65ff8c-9a66-4b3b-97a2-5ce0fc2d12b9
	I0229 02:42:22.529337    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.529337    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.529337    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.529337    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"2332789d-7280-427a-9644-fc1ffcfc737d","resourceVersion":"1763","creationTimestamp":"2024-02-29T02:35:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:35:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3803 chars]
	I0229 02:42:22.530067    1532 pod_ready.go:92] pod "kube-proxy-4gbrl" in "kube-system" namespace has status "Ready":"True"
	I0229 02:42:22.530103    1532 pod_ready.go:81] duration metric: took 383.8954ms waiting for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:22.530103    1532 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:22.730232    1532 request.go:629] Waited for 199.9147ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:42:22.730232    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:42:22.730232    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.730232    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.730232    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.734013    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:22.734098    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.734098    1532 round_trippers.go:580]     Audit-Id: dc06718a-460e-4bb1-9493-c7273af18ac9
	I0229 02:42:22.734098    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.734098    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.734098    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.734098    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.734098    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:22 GMT
	I0229 02:42:22.734350    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6r6j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b84b22d-3786-4f9e-a23a-c7cfc93bb671","resourceVersion":"1923","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5735 chars]
	I0229 02:42:22.932928    1532 request.go:629] Waited for 197.892ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:22.933019    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:22.933019    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:22.933019    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:22.933019    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:22.936446    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:22.936446    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:22.936446    1532 round_trippers.go:580]     Audit-Id: 27472a98-f820-4e7e-9e60-bfb3194a0861
	I0229 02:42:22.937170    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:22.937170    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:22.937170    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:22.937170    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:22.937170    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:23 GMT
	I0229 02:42:22.937395    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:22.937782    1532 pod_ready.go:97] node "multinode-314500" hosting pod "kube-proxy-6r6j4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.937782    1532 pod_ready.go:81] duration metric: took 407.6565ms waiting for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	E0229 02:42:22.937782    1532 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-314500" hosting pod "kube-proxy-6r6j4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:22.937782    1532 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:23.137129    1532 request.go:629] Waited for 199.3361ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:42:23.137497    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:42:23.137497    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:23.137497    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:23.137497    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:23.142072    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:23.142072    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:23.142072    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:23.142072    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:23.142072    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:23 GMT
	I0229 02:42:23.142072    1532 round_trippers.go:580]     Audit-Id: 80f19c24-7ea2-4cbc-8d4e-6b035c15a341
	I0229 02:42:23.142072    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:23.143095    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:23.143391    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-314500","namespace":"kube-system","uid":"31fcecc6-17de-43a6-892d-37cd915de64b","resourceVersion":"1912","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.mirror":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.seen":"2024-02-29T02:15:52.221399886Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5145 chars]
	I0229 02:42:23.325677    1532 request.go:629] Waited for 181.4692ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:23.325842    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:23.325915    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:23.325915    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:23.325915    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:23.329726    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:23.330286    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:23.330286    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:23.330286    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:23.330286    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:23.330286    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:23 GMT
	I0229 02:42:23.330286    1532 round_trippers.go:580]     Audit-Id: 52c816cb-1c4b-4d28-affb-b8710b831e6e
	I0229 02:42:23.330286    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:23.330567    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:23.331090    1532 pod_ready.go:97] node "multinode-314500" hosting pod "kube-scheduler-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:23.331090    1532 pod_ready.go:81] duration metric: took 393.2863ms waiting for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	E0229 02:42:23.331090    1532 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-314500" hosting pod "kube-scheduler-multinode-314500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-314500" has status "Ready":"False"
	I0229 02:42:23.331090    1532 pod_ready.go:38] duration metric: took 1.3131601s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:42:23.331196    1532 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:42:23.349657    1532 command_runner.go:130] > -16
	I0229 02:42:23.350152    1532 ops.go:34] apiserver oom_adj: -16
	I0229 02:42:23.350152    1532 kubeadm.go:640] restartCluster took 11.4364638s
	I0229 02:42:23.350152    1532 kubeadm.go:406] StartCluster complete in 11.5031419s
	I0229 02:42:23.350152    1532 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:42:23.350471    1532 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:42:23.351676    1532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:42:23.353077    1532 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:42:23.353077    1532 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:42:23.354127    1532 out.go:177] * Enabled addons: 
	I0229 02:42:23.353471    1532 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:42:23.354676    1532 addons.go:505] enable addons completed in 1.823ms: enabled=[]
	I0229 02:42:23.364536    1532 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:42:23.365537    1532 kapi.go:59] client config for multinode-314500: &rest.Config{Host:"https://172.19.2.238:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-314500\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2480600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:42:23.366081    1532 cert_rotation.go:137] Starting client certificate rotation controller
	I0229 02:42:23.366081    1532 round_trippers.go:463] GET https://172.19.2.238:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 02:42:23.366081    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:23.366081    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:23.366081    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:23.380766    1532 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0229 02:42:23.380766    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:23.380766    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:23.380766    1532 round_trippers.go:580]     Content-Length: 292
	I0229 02:42:23.380766    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:23 GMT
	I0229 02:42:23.380766    1532 round_trippers.go:580]     Audit-Id: c8eb4613-a2b5-4a69-afd6-78803dddbef0
	I0229 02:42:23.380766    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:23.380766    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:23.380766    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:23.380766    1532 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b4cd7015-a823-43da-bf82-ae91c5678262","resourceVersion":"1925","creationTimestamp":"2024-02-29T02:15:51Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 02:42:23.380766    1532 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-314500" context rescaled to 1 replicas
	I0229 02:42:23.380766    1532 start.go:223] Will wait 6m0s for node &{Name: IP:172.19.2.238 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 02:42:23.381767    1532 out.go:177] * Verifying Kubernetes components...
	I0229 02:42:23.393463    1532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:42:23.489211    1532 command_runner.go:130] > apiVersion: v1
	I0229 02:42:23.489270    1532 command_runner.go:130] > data:
	I0229 02:42:23.489331    1532 command_runner.go:130] >   Corefile: |
	I0229 02:42:23.489331    1532 command_runner.go:130] >     .:53 {
	I0229 02:42:23.489331    1532 command_runner.go:130] >         log
	I0229 02:42:23.489331    1532 command_runner.go:130] >         errors
	I0229 02:42:23.489389    1532 command_runner.go:130] >         health {
	I0229 02:42:23.489389    1532 command_runner.go:130] >            lameduck 5s
	I0229 02:42:23.489389    1532 command_runner.go:130] >         }
	I0229 02:42:23.489389    1532 command_runner.go:130] >         ready
	I0229 02:42:23.489389    1532 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0229 02:42:23.489467    1532 command_runner.go:130] >            pods insecure
	I0229 02:42:23.489467    1532 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0229 02:42:23.489467    1532 command_runner.go:130] >            ttl 30
	I0229 02:42:23.489467    1532 command_runner.go:130] >         }
	I0229 02:42:23.489537    1532 command_runner.go:130] >         prometheus :9153
	I0229 02:42:23.489537    1532 command_runner.go:130] >         hosts {
	I0229 02:42:23.489537    1532 command_runner.go:130] >            172.19.0.1 host.minikube.internal
	I0229 02:42:23.489537    1532 command_runner.go:130] >            fallthrough
	I0229 02:42:23.489599    1532 command_runner.go:130] >         }
	I0229 02:42:23.489599    1532 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0229 02:42:23.489658    1532 command_runner.go:130] >            max_concurrent 1000
	I0229 02:42:23.489702    1532 command_runner.go:130] >         }
	I0229 02:42:23.489702    1532 command_runner.go:130] >         cache 30
	I0229 02:42:23.489743    1532 command_runner.go:130] >         loop
	I0229 02:42:23.489743    1532 command_runner.go:130] >         reload
	I0229 02:42:23.489789    1532 command_runner.go:130] >         loadbalance
	I0229 02:42:23.489829    1532 command_runner.go:130] >     }
	I0229 02:42:23.489829    1532 command_runner.go:130] > kind: ConfigMap
	I0229 02:42:23.489829    1532 command_runner.go:130] > metadata:
	I0229 02:42:23.489884    1532 command_runner.go:130] >   creationTimestamp: "2024-02-29T02:15:51Z"
	I0229 02:42:23.489884    1532 command_runner.go:130] >   name: coredns
	I0229 02:42:23.489929    1532 command_runner.go:130] >   namespace: kube-system
	I0229 02:42:23.489929    1532 command_runner.go:130] >   resourceVersion: "388"
	I0229 02:42:23.489979    1532 command_runner.go:130] >   uid: 3fc93d17-14a4-4d49-9f77-f2cd8adceaed
	I0229 02:42:23.490114    1532 node_ready.go:35] waiting up to 6m0s for node "multinode-314500" to be "Ready" ...
	I0229 02:42:23.490114    1532 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 02:42:23.529924    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:23.529995    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:23.529995    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:23.530050    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:23.533609    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:23.534454    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:23.534536    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:23.534536    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:23.534536    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:23 GMT
	I0229 02:42:23.534536    1532 round_trippers.go:580]     Audit-Id: 862974af-13fa-4a05-b555-4e71dc715f88
	I0229 02:42:23.534536    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:23.534536    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:23.534772    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:23.998269    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:23.998346    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:23.998346    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:23.998346    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:24.002747    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:24.002747    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:24.002747    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:24.002747    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:24.002747    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:24.002841    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:24 GMT
	I0229 02:42:24.002841    1532 round_trippers.go:580]     Audit-Id: 59e18ea3-9875-4258-9a10-bef54a6f56dd
	I0229 02:42:24.002841    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:24.002841    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:24.504043    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:24.504043    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:24.504043    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:24.504043    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:24.508326    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:24.508326    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:24.508326    1532 round_trippers.go:580]     Audit-Id: 7eac6d18-fde4-4810-9104-04c075b98e0e
	I0229 02:42:24.508326    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:24.508326    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:24.508326    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:24.508326    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:24.508326    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:24 GMT
	I0229 02:42:24.509269    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:25.004336    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:25.004466    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:25.004466    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:25.004466    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:25.008653    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:25.008653    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:25.008653    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:25 GMT
	I0229 02:42:25.008653    1532 round_trippers.go:580]     Audit-Id: e316f9b3-0410-4ab1-987e-85c8a68567d1
	I0229 02:42:25.008653    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:25.008653    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:25.008653    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:25.008653    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:25.009389    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:25.503455    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:25.503455    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:25.503455    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:25.503455    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:25.508188    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:25.508734    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:25.508734    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:25.508734    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:25 GMT
	I0229 02:42:25.508734    1532 round_trippers.go:580]     Audit-Id: 70605347-1d6e-4994-b67b-168436956b75
	I0229 02:42:25.508734    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:25.508734    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:25.508820    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:25.508980    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:25.509085    1532 node_ready.go:58] node "multinode-314500" has status "Ready":"False"
	I0229 02:42:26.002981    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:26.002981    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:26.002981    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:26.002981    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:26.007390    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:26.007390    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:26.007390    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:26.007390    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:26.007390    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:26 GMT
	I0229 02:42:26.007390    1532 round_trippers.go:580]     Audit-Id: 1ded61c7-8344-414c-93c5-c7aa4655c793
	I0229 02:42:26.007390    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:26.007390    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:26.008752    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:26.502677    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:26.502677    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:26.502677    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:26.502677    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:26.507399    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:26.507399    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:26.507399    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:26.507399    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:26.507399    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:26.507399    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:26 GMT
	I0229 02:42:26.507399    1532 round_trippers.go:580]     Audit-Id: 275954e1-e267-47b1-8de1-06409f9dc777
	I0229 02:42:26.507399    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:26.508273    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:27.000090    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:27.000090    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:27.000192    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:27.000192    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:27.004613    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:27.004613    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:27.004696    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:27 GMT
	I0229 02:42:27.004696    1532 round_trippers.go:580]     Audit-Id: 47af4e67-b91f-4b04-af66-916b45057dad
	I0229 02:42:27.004696    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:27.004696    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:27.004696    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:27.004696    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:27.005145    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:27.498722    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:27.498722    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:27.498722    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:27.498722    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:27.503044    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:27.503044    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:27.503044    1532 round_trippers.go:580]     Audit-Id: 0beaac30-6531-4e36-a77c-5a4c1f201f9c
	I0229 02:42:27.503393    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:27.503393    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:27.503393    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:27.503393    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:27.503393    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:27 GMT
	I0229 02:42:27.503803    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:27.999550    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:27.999775    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:27.999775    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:27.999775    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:28.002880    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:28.003787    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:28.003787    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:28.003787    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:28.003787    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:28.003787    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:28.003787    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:28 GMT
	I0229 02:42:28.003787    1532 round_trippers.go:580]     Audit-Id: 417fbc2b-acf9-4a24-ab97-9645f8a68925
	I0229 02:42:28.004297    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:28.004871    1532 node_ready.go:58] node "multinode-314500" has status "Ready":"False"
	I0229 02:42:28.501125    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:28.501215    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:28.501215    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:28.501215    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:28.506245    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:28.506245    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:28.506245    1532 round_trippers.go:580]     Audit-Id: 7601a368-12cb-4605-9dd1-7b8b7ac96907
	I0229 02:42:28.506245    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:28.506245    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:28.506245    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:28.506245    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:28.506245    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:28 GMT
	I0229 02:42:28.506245    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:29.000605    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:29.000814    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:29.000814    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:29.000814    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:29.006123    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:29.006123    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:29.006123    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:29.006123    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:29.006123    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:29.006123    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:29 GMT
	I0229 02:42:29.006123    1532 round_trippers.go:580]     Audit-Id: 5b3c1156-f827-4416-83da-1ab62fce6470
	I0229 02:42:29.006123    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:29.006123    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"1902","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5363 chars]
	I0229 02:42:29.501803    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:29.501803    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:29.501803    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:29.501803    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:29.506266    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:29.506648    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:29.506648    1532 round_trippers.go:580]     Audit-Id: c7b406ea-c37a-4e32-ab6e-98f2c844d01f
	I0229 02:42:29.506648    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:29.506648    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:29.506648    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:29.506648    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:29.506648    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:29 GMT
	I0229 02:42:29.506886    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:29.507314    1532 node_ready.go:49] node "multinode-314500" has status "Ready":"True"
	I0229 02:42:29.507380    1532 node_ready.go:38] duration metric: took 6.0169309s waiting for node "multinode-314500" to be "Ready" ...
	I0229 02:42:29.507380    1532 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:42:29.507523    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods
	I0229 02:42:29.507594    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:29.507594    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:29.507594    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:29.512860    1532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:42:29.512860    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:29.512860    1532 round_trippers.go:580]     Audit-Id: f3bd73d8-9ae2-4d3d-82db-c37f590b81aa
	I0229 02:42:29.512860    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:29.512860    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:29.512860    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:29.512860    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:29.512860    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:29 GMT
	I0229 02:42:29.514654    1532 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2004"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 70099 chars]
	I0229 02:42:29.517593    1532 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:29.517746    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:29.517746    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:29.517746    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:29.517816    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:29.520487    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:29.520487    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:29.520487    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:29 GMT
	I0229 02:42:29.520487    1532 round_trippers.go:580]     Audit-Id: 3a690a89-6002-415c-8d4a-f87b6db67c13
	I0229 02:42:29.521126    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:29.521126    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:29.521126    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:29.521126    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:29.521321    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:29.522008    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:29.522008    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:29.522008    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:29.522072    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:29.524285    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:29.524285    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:29.525278    1532 round_trippers.go:580]     Audit-Id: 1f7863c5-64b9-46ae-b389-e3f3c5598660
	I0229 02:42:29.525278    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:29.525278    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:29.525278    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:29.525278    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:29.525278    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:29 GMT
	I0229 02:42:29.525773    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:30.019790    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:30.020098    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:30.020098    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:30.020098    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:30.024370    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:30.024761    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:30.024761    1532 round_trippers.go:580]     Audit-Id: 6346db92-6740-443a-9869-65aef1977379
	I0229 02:42:30.024761    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:30.024761    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:30.024761    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:30.024761    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:30.024839    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:30 GMT
	I0229 02:42:30.025000    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:30.025629    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:30.025739    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:30.025739    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:30.025739    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:30.029100    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:30.029399    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:30.029399    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:30.029399    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:30.029399    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:30.029399    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:30.029440    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:30 GMT
	I0229 02:42:30.029440    1532 round_trippers.go:580]     Audit-Id: 2f92f838-dd56-4242-9738-4c6904e99a84
	I0229 02:42:30.030285    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:30.520248    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:30.520326    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:30.520326    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:30.520326    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:30.524091    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:30.525147    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:30.525147    1532 round_trippers.go:580]     Audit-Id: 08c0e788-9a7e-4971-9d77-7d8299956494
	I0229 02:42:30.525147    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:30.525147    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:30.525147    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:30.525147    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:30.525147    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:30 GMT
	I0229 02:42:30.525794    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:30.526485    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:30.526485    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:30.526485    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:30.526485    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:30.530646    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:30.530646    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:30.530646    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:30.530646    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:30.530646    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:30.530646    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:30.530646    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:30 GMT
	I0229 02:42:30.530646    1532 round_trippers.go:580]     Audit-Id: 5884cbe8-7c78-4dc5-bf2a-98fef91872d8
	I0229 02:42:30.530646    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:31.021909    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:31.022164    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:31.022164    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:31.022164    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:31.030657    1532 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 02:42:31.030657    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:31.030657    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:31.030657    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:31.030657    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:31 GMT
	I0229 02:42:31.030657    1532 round_trippers.go:580]     Audit-Id: 8d2402d7-2652-42e8-8ab7-990335c3698b
	I0229 02:42:31.030657    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:31.030657    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:31.031486    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:31.032393    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:31.032463    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:31.032463    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:31.032494    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:31.035776    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:31.035921    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:31.035966    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:31 GMT
	I0229 02:42:31.035966    1532 round_trippers.go:580]     Audit-Id: de5d2437-e168-4a29-be3d-639a627ab403
	I0229 02:42:31.035966    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:31.035966    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:31.035966    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:31.035966    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:31.036246    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:31.524118    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:31.524118    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:31.524118    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:31.524118    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:31.528263    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:31.529075    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:31.529075    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:31.529195    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:31.529195    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:31 GMT
	I0229 02:42:31.529195    1532 round_trippers.go:580]     Audit-Id: fba87570-e18e-4ed5-8b45-b28fe236ff01
	I0229 02:42:31.529195    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:31.529195    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:31.529568    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:31.530467    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:31.530567    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:31.530600    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:31.530600    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:31.536861    1532 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:42:31.537014    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:31.537041    1532 round_trippers.go:580]     Audit-Id: 32984390-bfee-4275-a455-3fb34366d49f
	I0229 02:42:31.537041    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:31.537041    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:31.537041    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:31.537041    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:31.537098    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:31 GMT
	I0229 02:42:31.537098    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:31.537098    1532 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:42:32.029786    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:32.029786    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:32.029786    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:32.029786    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:32.033662    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:32.033841    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:32.033841    1532 round_trippers.go:580]     Audit-Id: 7d7d392c-3c6a-4f38-8dfb-b5b932c569e9
	I0229 02:42:32.033841    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:32.033841    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:32.033841    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:32.033841    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:32.033841    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:32 GMT
	I0229 02:42:32.033841    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:32.034903    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:32.034963    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:32.034963    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:32.034963    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:32.038322    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:32.038322    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:32.038322    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:32.038322    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:32.038322    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:32.038322    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:32 GMT
	I0229 02:42:32.038322    1532 round_trippers.go:580]     Audit-Id: 81eb5156-9c6f-4dec-9354-09c6fa77ddbe
	I0229 02:42:32.038322    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:32.039710    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:32.531084    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:32.531154    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:32.531154    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:32.531154    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:32.534723    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:32.535740    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:32.535740    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:32.535740    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:32.535740    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:32 GMT
	I0229 02:42:32.535740    1532 round_trippers.go:580]     Audit-Id: dd7b7952-4800-41b2-8403-410a2000bec5
	I0229 02:42:32.535740    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:32.535740    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:32.536001    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:32.536715    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:32.536715    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:32.536715    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:32.536715    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:32.539187    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:32.540291    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:32.540291    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:32.540291    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:32.540291    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:32.540291    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:32.540291    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:32 GMT
	I0229 02:42:32.540291    1532 round_trippers.go:580]     Audit-Id: fcdb4160-096c-4fe7-9717-4c89d05bc4ea
	I0229 02:42:32.540568    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:33.032248    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:33.032371    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:33.032371    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:33.032371    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:33.039719    1532 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:42:33.039809    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:33.039809    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:33.039847    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:33 GMT
	I0229 02:42:33.039847    1532 round_trippers.go:580]     Audit-Id: 87bed496-4ded-41f2-a9ec-d02cf0b76476
	I0229 02:42:33.039847    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:33.039847    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:33.039847    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:33.040100    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:33.040299    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:33.040299    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:33.040299    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:33.040299    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:33.045103    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:33.045240    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:33.045240    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:33.045240    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:33.045240    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:33 GMT
	I0229 02:42:33.045240    1532 round_trippers.go:580]     Audit-Id: 450ab1d0-e519-4f5d-84a1-a1d8355adf3b
	I0229 02:42:33.045296    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:33.045296    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:33.045296    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:33.518802    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:33.518899    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:33.518899    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:33.518985    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:33.522257    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:33.522257    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:33.523290    1532 round_trippers.go:580]     Audit-Id: c047f467-224d-4666-a078-e0a25b3b53df
	I0229 02:42:33.523290    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:33.523290    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:33.523290    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:33.523352    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:33.523352    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:33 GMT
	I0229 02:42:33.523631    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:33.524477    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:33.524477    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:33.524567    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:33.524567    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:33.527742    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:33.528276    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:33.528276    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:33 GMT
	I0229 02:42:33.528276    1532 round_trippers.go:580]     Audit-Id: ac79d203-bdaa-476a-8f2e-daacb125a68b
	I0229 02:42:33.528276    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:33.528276    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:33.528276    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:33.528276    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:33.529115    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:34.021701    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:34.021701    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:34.021701    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:34.021701    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:34.026232    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:34.026687    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:34.026687    1532 round_trippers.go:580]     Audit-Id: c70140be-5ead-4862-b127-051263028635
	I0229 02:42:34.026687    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:34.026687    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:34.026687    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:34.026687    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:34.026687    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:34 GMT
	I0229 02:42:34.026924    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:34.027672    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:34.027672    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:34.027737    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:34.027737    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:34.030873    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:34.030873    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:34.030873    1532 round_trippers.go:580]     Audit-Id: 3e6de93e-a219-410c-9951-273886722341
	I0229 02:42:34.030873    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:34.030873    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:34.030873    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:34.030873    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:34.031549    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:34 GMT
	I0229 02:42:34.031832    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:34.032241    1532 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:42:34.524030    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:34.524083    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:34.524083    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:34.524083    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:34.527411    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:34.527411    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:34.527411    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:34.527411    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:34.527411    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:34.527411    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:34 GMT
	I0229 02:42:34.527411    1532 round_trippers.go:580]     Audit-Id: b9142a36-7c00-4bab-952d-6b9f7cb79e29
	I0229 02:42:34.527411    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:34.527411    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:34.528434    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:34.528434    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:34.528434    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:34.528434    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:34.535267    1532 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:42:34.535267    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:34.535267    1532 round_trippers.go:580]     Audit-Id: 2f08c29d-179a-4dce-b0af-993dade1f3e7
	I0229 02:42:34.535267    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:34.535267    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:34.535267    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:34.535267    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:34.535267    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:34 GMT
	I0229 02:42:34.535836    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:35.030605    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:35.030819    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:35.030819    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:35.030819    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:35.035127    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:35.035492    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:35.035492    1532 round_trippers.go:580]     Audit-Id: 396c49e8-19e3-4255-b7d0-3a1f3a00c8ec
	I0229 02:42:35.035492    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:35.035492    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:35.035492    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:35.035492    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:35.035602    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:35 GMT
	I0229 02:42:35.035807    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:35.036622    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:35.036694    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:35.036694    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:35.036694    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:35.039545    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:35.040548    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:35.040589    1532 round_trippers.go:580]     Audit-Id: c601e285-8c60-497c-b518-7a3a106f2fee
	I0229 02:42:35.040589    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:35.040589    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:35.040589    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:35.040589    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:35.040589    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:35 GMT
	I0229 02:42:35.041446    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:35.521774    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:35.521774    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:35.521774    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:35.521774    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:35.528372    1532 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:42:35.528372    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:35.528372    1532 round_trippers.go:580]     Audit-Id: b157fd61-caaa-480c-bd98-45de216fa95b
	I0229 02:42:35.528372    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:35.528372    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:35.528372    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:35.528372    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:35.528372    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:35 GMT
	I0229 02:42:35.529135    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:35.529288    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:35.529288    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:35.529288    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:35.529826    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:35.534024    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:35.534024    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:35.534024    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:35.535019    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:35.535019    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:35 GMT
	I0229 02:42:35.535019    1532 round_trippers.go:580]     Audit-Id: 90c35cba-44f9-4229-8415-bb2ea1b87a4f
	I0229 02:42:35.535019    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:35.535019    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:35.535019    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:36.029046    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:36.029046    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:36.029046    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:36.029046    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:36.036630    1532 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:42:36.036630    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:36.036630    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:36.036630    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:36.036630    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:36.037243    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:36 GMT
	I0229 02:42:36.037243    1532 round_trippers.go:580]     Audit-Id: 58ede345-7c6c-46d1-b459-12404cf70f2c
	I0229 02:42:36.037243    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:36.037540    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:36.038405    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:36.038478    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:36.038478    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:36.038478    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:36.041883    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:36.041940    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:36.041940    1532 round_trippers.go:580]     Audit-Id: f3e4f74c-7f1f-4713-9415-b3c64b8aa292
	I0229 02:42:36.041940    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:36.041940    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:36.041940    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:36.041940    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:36.041940    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:36 GMT
	I0229 02:42:36.041940    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:36.042555    1532 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:42:36.528946    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:36.529029    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:36.529114    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:36.529114    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:36.534550    1532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:42:36.535312    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:36.535312    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:36.535312    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:36.535312    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:36 GMT
	I0229 02:42:36.535312    1532 round_trippers.go:580]     Audit-Id: 1e8e32f9-7249-4e56-b511-dae41b4c6157
	I0229 02:42:36.535384    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:36.535384    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:36.535384    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"1910","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6544 chars]
	I0229 02:42:36.536307    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:36.536307    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:36.536387    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:36.536387    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:36.540591    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:36.540591    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:36.540591    1532 round_trippers.go:580]     Audit-Id: bfb336af-faee-4b1c-9809-ea5acbe6bba0
	I0229 02:42:36.540591    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:36.540591    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:36.540591    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:36.540591    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:36.540591    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:36 GMT
	I0229 02:42:36.541713    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:37.019484    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:37.019484    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:37.019484    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:37.019484    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:37.023632    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:37.023632    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:37.023632    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:37.023632    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:37.023632    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:37.023632    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:37 GMT
	I0229 02:42:37.023632    1532 round_trippers.go:580]     Audit-Id: 8b1fc6ab-9730-4d9b-a6ff-bfdde8a3d806
	I0229 02:42:37.023632    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:37.023872    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:37.024515    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:37.024589    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:37.024589    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:37.024589    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:37.031940    1532 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:42:37.031940    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:37.031940    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:37.031940    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:37.031940    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:37 GMT
	I0229 02:42:37.031940    1532 round_trippers.go:580]     Audit-Id: 21d8f902-efbd-41e9-9ad0-eb61b3d23b7c
	I0229 02:42:37.031940    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:37.031940    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:37.033437    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:37.519675    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:37.519754    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:37.519823    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:37.519823    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:37.524898    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:37.524898    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:37.524898    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:37.525018    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:37.525018    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:37.525018    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:37 GMT
	I0229 02:42:37.525018    1532 round_trippers.go:580]     Audit-Id: 16c8e1f2-72a2-44c0-81fe-aa312a1ca737
	I0229 02:42:37.525018    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:37.525098    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:37.526673    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:37.526778    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:37.526778    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:37.526778    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:37.530156    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:37.530242    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:37.530242    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:37.530242    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:37.530242    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:37.530242    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:37.530242    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:37 GMT
	I0229 02:42:37.530242    1532 round_trippers.go:580]     Audit-Id: e52216bf-67ea-48c2-8659-a24678cb5ce9
	I0229 02:42:37.530310    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:38.030928    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:38.030928    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:38.030928    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:38.031008    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:38.035661    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:38.036008    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:38.036008    1532 round_trippers.go:580]     Audit-Id: 1a4a89bf-9660-4e3e-b504-e85e3fe404fa
	I0229 02:42:38.036008    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:38.036008    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:38.036008    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:38.036008    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:38.036008    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:38 GMT
	I0229 02:42:38.036104    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:38.036853    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:38.036853    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:38.036853    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:38.036853    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:38.042943    1532 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:42:38.042943    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:38.042943    1532 round_trippers.go:580]     Audit-Id: c9f62ff4-fc90-482d-96d1-6290e4b39646
	I0229 02:42:38.042943    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:38.042943    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:38.042943    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:38.042943    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:38.042943    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:38 GMT
	I0229 02:42:38.042943    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:38.044014    1532 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:42:38.529355    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:38.529355    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:38.529355    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:38.529355    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:38.533944    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:38.533944    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:38.533944    1532 round_trippers.go:580]     Audit-Id: e33e9a76-c8f5-4bbd-868c-f0aec7fe2878
	I0229 02:42:38.533944    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:38.533944    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:38.533944    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:38.533944    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:38.533944    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:38 GMT
	I0229 02:42:38.533944    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:38.535572    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:38.535572    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:38.535572    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:38.535572    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:38.542454    1532 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:42:38.542454    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:38.542454    1532 round_trippers.go:580]     Audit-Id: 51086a11-9c55-4d95-85f4-58c0d963ec3d
	I0229 02:42:38.542454    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:38.542454    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:38.542454    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:38.542454    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:38.542454    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:38 GMT
	I0229 02:42:38.542454    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:39.030447    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:39.030447    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:39.030447    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:39.030447    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:39.039091    1532 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 02:42:39.039091    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:39.039091    1532 round_trippers.go:580]     Audit-Id: ce584817-f37f-4215-aecf-fc5a309af975
	I0229 02:42:39.039091    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:39.039091    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:39.039091    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:39.039091    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:39.039091    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:39 GMT
	I0229 02:42:39.040056    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:39.040743    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:39.040773    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:39.040816    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:39.040816    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:39.044050    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:39.044050    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:39.044050    1532 round_trippers.go:580]     Audit-Id: d08d8fc9-4488-470a-aeab-1e2e99ed1321
	I0229 02:42:39.044050    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:39.044050    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:39.044050    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:39.044050    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:39.044050    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:39 GMT
	I0229 02:42:39.044050    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:39.531894    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:39.531991    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:39.531991    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:39.531991    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:39.536240    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:39.536240    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:39.536240    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:39.536240    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:39.536240    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:39 GMT
	I0229 02:42:39.536240    1532 round_trippers.go:580]     Audit-Id: ebaa88d6-8655-4a5f-823e-98bc188656b9
	I0229 02:42:39.536240    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:39.536240    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:39.536240    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:39.537421    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:39.537421    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:39.537511    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:39.537511    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:39.541717    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:39.541717    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:39.542338    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:39 GMT
	I0229 02:42:39.542338    1532 round_trippers.go:580]     Audit-Id: dfa7c0d0-2580-4b87-a71a-2241acd67772
	I0229 02:42:39.542338    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:39.542338    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:39.542338    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:39.542338    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:39.542606    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:40.030616    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:40.030691    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:40.030691    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:40.030691    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:40.035389    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:40.036010    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:40.036010    1532 round_trippers.go:580]     Audit-Id: 771cb54f-07c0-4668-b4c0-cd16519abad3
	I0229 02:42:40.036010    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:40.036010    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:40.036010    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:40.036010    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:40.036010    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:40 GMT
	I0229 02:42:40.036079    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:40.036952    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:40.036952    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:40.036952    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:40.036952    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:40.040210    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:40.040210    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:40.040210    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:40.040388    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:40.040388    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:40.040388    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:40.040388    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:40 GMT
	I0229 02:42:40.040388    1532 round_trippers.go:580]     Audit-Id: 6cca2caf-9345-4430-93c3-659dbda40622
	I0229 02:42:40.040628    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:40.532692    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:40.532783    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:40.532783    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:40.532872    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:40.541137    1532 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 02:42:40.541137    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:40.541137    1532 round_trippers.go:580]     Audit-Id: bd5c62d2-b684-48f3-bae3-9e6e2223253c
	I0229 02:42:40.541137    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:40.541137    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:40.541137    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:40.541137    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:40.541137    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:40 GMT
	I0229 02:42:40.541137    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:40.542766    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:40.542876    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:40.542876    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:40.542924    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:40.546144    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:40.546144    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:40.546144    1532 round_trippers.go:580]     Audit-Id: a5183745-8f41-4a9c-94db-83112b9cf49d
	I0229 02:42:40.546144    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:40.546144    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:40.546144    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:40.546385    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:40.546385    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:40 GMT
	I0229 02:42:40.546385    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:40.547186    1532 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:42:41.032638    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:41.032801    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:41.032801    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:41.032801    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:41.036687    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:41.036687    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:41.036687    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:41.036687    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:41.037192    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:41.037192    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:41 GMT
	I0229 02:42:41.037192    1532 round_trippers.go:580]     Audit-Id: 80ed4ff2-7723-4c13-b7bc-08042779f65d
	I0229 02:42:41.037192    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:41.037192    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:41.038162    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:41.038235    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:41.038235    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:41.038235    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:41.042811    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:41.043200    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:41.043200    1532 round_trippers.go:580]     Audit-Id: 4276d5fa-8cdd-4ea3-8b9f-fe30327d82b6
	I0229 02:42:41.043200    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:41.043200    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:41.043200    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:41.043200    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:41.043200    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:41 GMT
	I0229 02:42:41.043555    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:41.533653    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:41.533653    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:41.533653    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:41.533653    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:41.538663    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:41.538663    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:41.538663    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:41 GMT
	I0229 02:42:41.538663    1532 round_trippers.go:580]     Audit-Id: 3f2def52-2e7e-4353-9ada-d3ad35c7461c
	I0229 02:42:41.538663    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:41.538768    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:41.538768    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:41.538768    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:41.538988    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:41.539742    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:41.539742    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:41.539742    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:41.539742    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:41.543811    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:41.543811    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:41.543811    1532 round_trippers.go:580]     Audit-Id: 0fd88b12-29b9-4a0e-a7f3-40debbf2b3ba
	I0229 02:42:41.543811    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:41.543811    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:41.543811    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:41.544264    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:41.544264    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:41 GMT
	I0229 02:42:41.544341    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:42.034997    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:42.035093    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:42.035093    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:42.035093    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:42.038458    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:42.038458    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:42.039468    1532 round_trippers.go:580]     Audit-Id: a2698305-b046-4012-8e87-8bf79993d2c6
	I0229 02:42:42.039468    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:42.039580    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:42.039580    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:42.039580    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:42.039580    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:42 GMT
	I0229 02:42:42.039792    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:42.040437    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:42.040508    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:42.040508    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:42.040508    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:42.043800    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:42.044048    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:42.044048    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:42.044048    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:42.044048    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:42.044048    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:42.044048    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:42 GMT
	I0229 02:42:42.044165    1532 round_trippers.go:580]     Audit-Id: 6fae7c0f-f838-4dfd-9a12-912702dc137e
	I0229 02:42:42.044394    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:42.522752    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:42.522752    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:42.522752    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:42.522752    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:42.528729    1532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:42:42.528729    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:42.528729    1532 round_trippers.go:580]     Audit-Id: 9f9f6ff5-a0c3-488a-ad5f-a7655ec073ad
	I0229 02:42:42.528729    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:42.528729    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:42.528729    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:42.528729    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:42.528729    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:42 GMT
	I0229 02:42:42.528729    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:42.529970    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:42.529970    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:42.529970    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:42.529970    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:42.533380    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:42.534356    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:42.534356    1532 round_trippers.go:580]     Audit-Id: 9cbe03ff-dc1e-41e1-8605-8726572de6e0
	I0229 02:42:42.534356    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:42.534356    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:42.534356    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:42.534356    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:42.534356    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:42 GMT
	I0229 02:42:42.534356    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:43.022026    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:43.022284    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:43.022284    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:43.022284    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:43.027364    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:43.027415    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:43.027415    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:43.027415    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:43.027415    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:43 GMT
	I0229 02:42:43.027415    1532 round_trippers.go:580]     Audit-Id: 2903b758-0f7f-44bc-bb04-0f14ece6cd23
	I0229 02:42:43.027415    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:43.027415    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:43.027618    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:43.028294    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:43.028294    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:43.028294    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:43.028294    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:43.032570    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:43.032570    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:43.032781    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:43.032781    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:43.032781    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:43.032781    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:43.032781    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:43 GMT
	I0229 02:42:43.032781    1532 round_trippers.go:580]     Audit-Id: 47db3b12-14e8-45ad-b614-b9128371f4f3
	I0229 02:42:43.032915    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:43.032915    1532 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:42:43.519347    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:43.519347    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:43.519347    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:43.519347    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:43.523542    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:43.523877    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:43.523877    1532 round_trippers.go:580]     Audit-Id: 54d19cfe-c9a2-4fe8-8bd0-0c45ab35b9b8
	I0229 02:42:43.523877    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:43.523877    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:43.523877    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:43.523877    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:43.523877    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:43 GMT
	I0229 02:42:43.524115    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:43.524729    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:43.524729    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:43.524729    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:43.524821    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:43.532692    1532 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:42:43.532692    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:43.532692    1532 round_trippers.go:580]     Audit-Id: b4c6fd04-fe2b-46ac-9940-8d06356a4d45
	I0229 02:42:43.532692    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:43.532692    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:43.532692    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:43.532692    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:43.532692    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:43 GMT
	I0229 02:42:43.533732    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:44.021540    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:44.021540    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:44.021540    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:44.021540    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:44.026106    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:44.026106    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:44.026106    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:44.026106    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:44.026106    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:44.026106    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:44 GMT
	I0229 02:42:44.026106    1532 round_trippers.go:580]     Audit-Id: eeba041c-3fd7-4a97-b248-0688dfba7107
	I0229 02:42:44.026106    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:44.026314    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:44.027837    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:44.027925    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:44.027925    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:44.027925    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:44.032048    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:44.032048    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:44.032048    1532 round_trippers.go:580]     Audit-Id: 41d66d1a-610d-46bc-a671-5e494162c854
	I0229 02:42:44.032048    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:44.032117    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:44.032117    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:44.032117    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:44.032117    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:44 GMT
	I0229 02:42:44.032318    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:44.521256    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:44.521256    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:44.521256    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:44.521256    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:44.525906    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:44.525906    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:44.525906    1532 round_trippers.go:580]     Audit-Id: 436534d0-e307-4d91-b0c0-87f8ad073da8
	I0229 02:42:44.525906    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:44.525906    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:44.525906    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:44.525906    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:44.526074    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:44 GMT
	I0229 02:42:44.526284    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:44.527011    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:44.527011    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:44.527011    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:44.527011    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:44.531019    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:44.531019    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:44.531019    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:44.531019    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:44 GMT
	I0229 02:42:44.531019    1532 round_trippers.go:580]     Audit-Id: 2dd9544f-0e4e-4fa1-8a03-2d17d422c845
	I0229 02:42:44.531019    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:44.531019    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:44.531019    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:44.531889    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:45.026958    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:45.026958    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.026958    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.026958    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.031368    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:45.031405    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.031405    1532 round_trippers.go:580]     Audit-Id: 0b8fea62-3449-4646-a7bf-807f4343e251
	I0229 02:42:45.031405    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.031405    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.031405    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.031486    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.031486    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.031604    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2031","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6721 chars]
	I0229 02:42:45.032373    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:45.032440    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.032440    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.032440    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.036779    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:45.038215    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.038215    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.038279    1532 round_trippers.go:580]     Audit-Id: 409e3d91-3cf5-4c78-a141-1d33c3c29618
	I0229 02:42:45.038279    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.038279    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.038426    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.038464    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.038717    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:45.039256    1532 pod_ready.go:102] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"False"
	I0229 02:42:45.519371    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8g6tg
	I0229 02:42:45.519371    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.519371    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.519485    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.525854    1532 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 02:42:45.525854    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.525854    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.525854    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.525854    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.525854    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.525854    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.525854    1532 round_trippers.go:580]     Audit-Id: f44e01be-ce76-4fd3-96de-b3cfa7e37ea0
	I0229 02:42:45.526540    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2045","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6492 chars]
	I0229 02:42:45.527440    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:45.527440    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.527440    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.527440    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.531140    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:45.531140    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.531140    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.531140    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.531140    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.531140    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.531140    1532 round_trippers.go:580]     Audit-Id: a78856b0-105e-4619-a5e8-dd84e8dfbefb
	I0229 02:42:45.531140    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.531140    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:45.531140    1532 pod_ready.go:92] pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace has status "Ready":"True"
	I0229 02:42:45.531140    1532 pod_ready.go:81] duration metric: took 16.0126552s waiting for pod "coredns-5dd5756b68-8g6tg" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.531140    1532 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.531140    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-314500
	I0229 02:42:45.531140    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.531140    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.531140    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.535519    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:45.535519    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.535519    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.535519    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.535519    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.535519    1532 round_trippers.go:580]     Audit-Id: 8567707e-fe98-4ad8-b2ca-3bc0079b1807
	I0229 02:42:45.535519    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.535519    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.535519    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-314500","namespace":"kube-system","uid":"64dda041-1f1d-4866-aa39-62d21bd84e46","resourceVersion":"2022","creationTimestamp":"2024-02-29T02:42:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.2.238:2379","kubernetes.io/config.hash":"96721f37a1f14642fee9a072efcaa322","kubernetes.io/config.mirror":"96721f37a1f14642fee9a072efcaa322","kubernetes.io/config.seen":"2024-02-29T02:42:14.259019103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5853 chars]
	I0229 02:42:45.536545    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:45.536545    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.536545    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.536545    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.540166    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:45.540166    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.540166    1532 round_trippers.go:580]     Audit-Id: 7d770b43-824f-49f7-a8f9-72a750430413
	I0229 02:42:45.540166    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.540241    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.540241    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.540241    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.540241    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.540435    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:45.540883    1532 pod_ready.go:92] pod "etcd-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:42:45.540949    1532 pod_ready.go:81] duration metric: took 9.7432ms waiting for pod "etcd-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.540949    1532 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.541020    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-314500
	I0229 02:42:45.541020    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.541091    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.541091    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.543540    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:45.543540    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.543540    1532 round_trippers.go:580]     Audit-Id: 7b95ed88-3bdc-494c-b565-af3b814a4a52
	I0229 02:42:45.543540    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.543540    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.543540    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.543540    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.543540    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.543540    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-314500","namespace":"kube-system","uid":"baa3dc33-6d86-4748-9d57-c64f45dcfbf7","resourceVersion":"2019","creationTimestamp":"2024-02-29T02:42:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.2.238:8443","kubernetes.io/config.hash":"a317731b2b94a8e14311676e58d24e16","kubernetes.io/config.mirror":"a317731b2b94a8e14311676e58d24e16","kubernetes.io/config.seen":"2024-02-29T02:42:14.259032504Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7391 chars]
	I0229 02:42:45.543540    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:45.543540    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.543540    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.543540    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.547201    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:45.547969    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.547969    1532 round_trippers.go:580]     Audit-Id: 6aa920d5-9bfc-49c3-9686-4e96bc639a85
	I0229 02:42:45.547969    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.547969    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.547969    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.547969    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.547969    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.548186    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:45.548218    1532 pod_ready.go:92] pod "kube-apiserver-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:42:45.548218    1532 pod_ready.go:81] duration metric: took 7.2686ms waiting for pod "kube-apiserver-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.548218    1532 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.548745    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-314500
	I0229 02:42:45.548785    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.548785    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.548785    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.550965    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:45.550965    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.550965    1532 round_trippers.go:580]     Audit-Id: 49d8abd2-c8d2-49a8-906c-fdeea0174ee6
	I0229 02:42:45.550965    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.550965    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.550965    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.550965    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.550965    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.552055    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-314500","namespace":"kube-system","uid":"58e57902-e113-44a9-b5b5-4aba2ba13491","resourceVersion":"2021","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.mirror":"46f4a0cce9ca64e19c1ad09d6f30ce1e","kubernetes.io/config.seen":"2024-02-29T02:15:52.221398986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7171 chars]
	I0229 02:42:45.552589    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:45.552589    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.552589    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.552589    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.554828    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:45.554828    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.554828    1532 round_trippers.go:580]     Audit-Id: 17d2b563-9991-4d83-935e-58c9edfdd70f
	I0229 02:42:45.554828    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.554828    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.554828    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.554828    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.554828    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.554828    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:45.554828    1532 pod_ready.go:92] pod "kube-controller-manager-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:42:45.554828    1532 pod_ready.go:81] duration metric: took 6.6098ms waiting for pod "kube-controller-manager-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.554828    1532 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.554828    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gbrl
	I0229 02:42:45.554828    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.554828    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.554828    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.559164    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:45.559164    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.559164    1532 round_trippers.go:580]     Audit-Id: 5505ef6f-d501-489a-9739-b14fe17d3c28
	I0229 02:42:45.559164    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.559164    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.559164    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.559164    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.559164    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.559164    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4gbrl","generateName":"kube-proxy-","namespace":"kube-system","uid":"accb56cb-79ee-4f16-b05e-91bf554c4a60","resourceVersion":"1598","creationTimestamp":"2024-02-29T02:18:53Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:18:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0229 02:42:45.560232    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500-m02
	I0229 02:42:45.560232    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.560232    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.560232    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.562480    1532 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 02:42:45.562794    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.562794    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.562794    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.562794    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.562794    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.562794    1532 round_trippers.go:580]     Audit-Id: aa713fa4-7805-41f9-9c5b-de1e783a6770
	I0229 02:42:45.562794    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.562937    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500-m02","uid":"2332789d-7280-427a-9644-fc1ffcfc737d","resourceVersion":"1763","creationTimestamp":"2024-02-29T02:35:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T02_37_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:35:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3803 chars]
	I0229 02:42:45.563175    1532 pod_ready.go:92] pod "kube-proxy-4gbrl" in "kube-system" namespace has status "Ready":"True"
	I0229 02:42:45.563277    1532 pod_ready.go:81] duration metric: took 8.4485ms waiting for pod "kube-proxy-4gbrl" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.563277    1532 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.729272    1532 request.go:629] Waited for 165.7218ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:42:45.729421    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6r6j4
	I0229 02:42:45.729421    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.729421    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.729421    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.733894    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:45.733894    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.733894    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:45 GMT
	I0229 02:42:45.733894    1532 round_trippers.go:580]     Audit-Id: 73735785-d973-4fb8-a4f5-93401373f12b
	I0229 02:42:45.733894    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.733894    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.733894    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.734021    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.734184    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6r6j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b84b22d-3786-4f9e-a23a-c7cfc93bb671","resourceVersion":"1923","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"99934fe5-0d72-4e83-8f59-4a0b59969008","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99934fe5-0d72-4e83-8f59-4a0b59969008\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5735 chars]
	I0229 02:42:45.929446    1532 request.go:629] Waited for 194.4118ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:45.929940    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:45.929992    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:45.929992    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:45.929992    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:45.934129    1532 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 02:42:45.934129    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:45.934129    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:46 GMT
	I0229 02:42:45.934129    1532 round_trippers.go:580]     Audit-Id: 37fd20f5-1f19-4e6a-84d9-26d049e4a9b7
	I0229 02:42:45.934129    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:45.934129    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:45.934129    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:45.934129    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:45.934129    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:45.934812    1532 pod_ready.go:92] pod "kube-proxy-6r6j4" in "kube-system" namespace has status "Ready":"True"
	I0229 02:42:45.934897    1532 pod_ready.go:81] duration metric: took 371.5987ms waiting for pod "kube-proxy-6r6j4" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:45.934897    1532 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:46.132712    1532 request.go:629] Waited for 197.7056ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:42:46.132712    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-314500
	I0229 02:42:46.132712    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:46.132712    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:46.132712    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:46.137669    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:46.137669    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:46.137669    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:46.137669    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:46 GMT
	I0229 02:42:46.137669    1532 round_trippers.go:580]     Audit-Id: d52651ee-608b-4ed2-aba0-86a6c9b316e0
	I0229 02:42:46.137669    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:46.137669    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:46.137669    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:46.138193    1532 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-314500","namespace":"kube-system","uid":"31fcecc6-17de-43a6-892d-37cd915de64b","resourceVersion":"2006","creationTimestamp":"2024-02-29T02:15:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.mirror":"3d9a79ff068a0922524863a8caa5053a","kubernetes.io/config.seen":"2024-02-29T02:15:52.221399886Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:15:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4901 chars]
	I0229 02:42:46.335467    1532 request.go:629] Waited for 196.596ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:46.335830    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes/multinode-314500
	I0229 02:42:46.335830    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:46.335830    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:46.335830    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:46.340675    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:46.340675    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:46.340675    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:46.340675    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:46.340675    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:46.340675    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:46.340675    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:46 GMT
	I0229 02:42:46.340675    1532 round_trippers.go:580]     Audit-Id: ff091a9f-7e11-40a3-9bb2-080c0dc6884a
	I0229 02:42:46.341196    1532 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T02:15:48Z","fieldsType":"FieldsV1","f [truncated 5236 chars]
	I0229 02:42:46.341885    1532 pod_ready.go:92] pod "kube-scheduler-multinode-314500" in "kube-system" namespace has status "Ready":"True"
	I0229 02:42:46.341973    1532 pod_ready.go:81] duration metric: took 407.0532ms waiting for pod "kube-scheduler-multinode-314500" in "kube-system" namespace to be "Ready" ...
	I0229 02:42:46.341973    1532 pod_ready.go:38] duration metric: took 16.8336558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:42:46.342076    1532 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:42:46.352159    1532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:42:46.379357    1532 command_runner.go:130] > 1764
	I0229 02:42:46.379430    1532 api_server.go:72] duration metric: took 22.9973833s to wait for apiserver process to appear ...
	I0229 02:42:46.379430    1532 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:42:46.379517    1532 api_server.go:253] Checking apiserver healthz at https://172.19.2.238:8443/healthz ...
	I0229 02:42:46.387427    1532 api_server.go:279] https://172.19.2.238:8443/healthz returned 200:
	ok
	I0229 02:42:46.387427    1532 round_trippers.go:463] GET https://172.19.2.238:8443/version
	I0229 02:42:46.387427    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:46.387427    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:46.387427    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:46.389008    1532 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 02:42:46.389621    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:46.389621    1532 round_trippers.go:580]     Audit-Id: 92aa30bd-81d9-475b-97e4-6e8fcd63cf76
	I0229 02:42:46.389621    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:46.389621    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:46.389621    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:46.389621    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:46.389621    1532 round_trippers.go:580]     Content-Length: 264
	I0229 02:42:46.389621    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:46 GMT
	I0229 02:42:46.389621    1532 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0229 02:42:46.389621    1532 api_server.go:141] control plane version: v1.28.4
	I0229 02:42:46.389621    1532 api_server.go:131] duration metric: took 10.1904ms to wait for apiserver health ...
	I0229 02:42:46.389621    1532 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:42:46.521976    1532 request.go:629] Waited for 132.0864ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods
	I0229 02:42:46.521976    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods
	I0229 02:42:46.521976    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:46.522171    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:46.522171    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:46.529571    1532 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 02:42:46.529571    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:46.529571    1532 round_trippers.go:580]     Audit-Id: a9ef5420-3445-4982-a012-2b92eeb07218
	I0229 02:42:46.529571    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:46.529571    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:46.529571    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:46.529571    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:46.529571    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:46 GMT
	I0229 02:42:46.530783    1532 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2049"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2045","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 69073 chars]
	I0229 02:42:46.533544    1532 system_pods.go:59] 10 kube-system pods found
	I0229 02:42:46.533544    1532 system_pods.go:61] "coredns-5dd5756b68-8g6tg" [ef7fb259-9f24-4645-9eff-2b16f6789e1b] Running
	I0229 02:42:46.533544    1532 system_pods.go:61] "etcd-multinode-314500" [64dda041-1f1d-4866-aa39-62d21bd84e46] Running
	I0229 02:42:46.533544    1532 system_pods.go:61] "kindnet-6r7b8" [402c3ac1-05a9-45f1-aa7d-c0fb8ced6c87] Running
	I0229 02:42:46.533544    1532 system_pods.go:61] "kindnet-t9r77" [4620d417-744c-4049-82ab-79d1ee7f047c] Running
	I0229 02:42:46.533544    1532 system_pods.go:61] "kube-apiserver-multinode-314500" [baa3dc33-6d86-4748-9d57-c64f45dcfbf7] Running
	I0229 02:42:46.533544    1532 system_pods.go:61] "kube-controller-manager-multinode-314500" [58e57902-e113-44a9-b5b5-4aba2ba13491] Running
	I0229 02:42:46.533544    1532 system_pods.go:61] "kube-proxy-4gbrl" [accb56cb-79ee-4f16-b05e-91bf554c4a60] Running
	I0229 02:42:46.533544    1532 system_pods.go:61] "kube-proxy-6r6j4" [2b84b22d-3786-4f9e-a23a-c7cfc93bb671] Running
	I0229 02:42:46.533544    1532 system_pods.go:61] "kube-scheduler-multinode-314500" [31fcecc6-17de-43a6-892d-37cd915de64b] Running
	I0229 02:42:46.533544    1532 system_pods.go:61] "storage-provisioner" [9780520b-8ff9-408a-ab6f-41b63790ccd1] Running
	I0229 02:42:46.533544    1532 system_pods.go:74] duration metric: took 143.9154ms to wait for pod list to return data ...
	I0229 02:42:46.533544    1532 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:42:46.725637    1532 request.go:629] Waited for 191.8574ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/namespaces/default/serviceaccounts
	I0229 02:42:46.725833    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/default/serviceaccounts
	I0229 02:42:46.725833    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:46.725833    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:46.725833    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:46.730177    1532 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 02:42:46.730177    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:46.730177    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:46.730177    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:46.730177    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:46.730177    1532 round_trippers.go:580]     Content-Length: 262
	I0229 02:42:46.730177    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:46 GMT
	I0229 02:42:46.730177    1532 round_trippers.go:580]     Audit-Id: fc20ff84-61e3-40cc-b461-8b475f6d3577
	I0229 02:42:46.730177    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:46.730177    1532 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"2049"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a442432a-e4e1-4889-bfa8-e3967acc17f0","resourceVersion":"330","creationTimestamp":"2024-02-29T02:16:04Z"}}]}
	I0229 02:42:46.730856    1532 default_sa.go:45] found service account: "default"
	I0229 02:42:46.730856    1532 default_sa.go:55] duration metric: took 197.3005ms for default service account to be created ...
	I0229 02:42:46.730856    1532 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:42:46.929039    1532 request.go:629] Waited for 197.8097ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods
	I0229 02:42:46.929303    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/namespaces/kube-system/pods
	I0229 02:42:46.929303    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:46.929303    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:46.929303    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:46.944000    1532 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0229 02:42:46.944096    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:46.944096    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:46.944096    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:46.944096    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:47 GMT
	I0229 02:42:46.944096    1532 round_trippers.go:580]     Audit-Id: f48d082c-2512-4352-a544-5d529df77a80
	I0229 02:42:46.944182    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:46.944182    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:46.945724    1532 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2049"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8g6tg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ef7fb259-9f24-4645-9eff-2b16f6789e1b","resourceVersion":"2045","creationTimestamp":"2024-02-29T02:16:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fbea7f44-b455-41a2-beaa-09d62a15c046","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T02:16:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbea7f44-b455-41a2-beaa-09d62a15c046\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 69073 chars]
	I0229 02:42:46.948796    1532 system_pods.go:86] 10 kube-system pods found
	I0229 02:42:46.948796    1532 system_pods.go:89] "coredns-5dd5756b68-8g6tg" [ef7fb259-9f24-4645-9eff-2b16f6789e1b] Running
	I0229 02:42:46.948796    1532 system_pods.go:89] "etcd-multinode-314500" [64dda041-1f1d-4866-aa39-62d21bd84e46] Running
	I0229 02:42:46.948796    1532 system_pods.go:89] "kindnet-6r7b8" [402c3ac1-05a9-45f1-aa7d-c0fb8ced6c87] Running
	I0229 02:42:46.948796    1532 system_pods.go:89] "kindnet-t9r77" [4620d417-744c-4049-82ab-79d1ee7f047c] Running
	I0229 02:42:46.948796    1532 system_pods.go:89] "kube-apiserver-multinode-314500" [baa3dc33-6d86-4748-9d57-c64f45dcfbf7] Running
	I0229 02:42:46.948796    1532 system_pods.go:89] "kube-controller-manager-multinode-314500" [58e57902-e113-44a9-b5b5-4aba2ba13491] Running
	I0229 02:42:46.948796    1532 system_pods.go:89] "kube-proxy-4gbrl" [accb56cb-79ee-4f16-b05e-91bf554c4a60] Running
	I0229 02:42:46.948796    1532 system_pods.go:89] "kube-proxy-6r6j4" [2b84b22d-3786-4f9e-a23a-c7cfc93bb671] Running
	I0229 02:42:46.948796    1532 system_pods.go:89] "kube-scheduler-multinode-314500" [31fcecc6-17de-43a6-892d-37cd915de64b] Running
	I0229 02:42:46.948796    1532 system_pods.go:89] "storage-provisioner" [9780520b-8ff9-408a-ab6f-41b63790ccd1] Running
	I0229 02:42:46.948796    1532 system_pods.go:126] duration metric: took 217.928ms to wait for k8s-apps to be running ...
	I0229 02:42:46.948796    1532 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:42:46.956925    1532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:42:46.989626    1532 system_svc.go:56] duration metric: took 40.8284ms WaitForService to wait for kubelet.
	I0229 02:42:46.989699    1532 kubeadm.go:581] duration metric: took 23.6076188s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:42:46.989772    1532 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:42:47.132105    1532 request.go:629] Waited for 142.2378ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.2.238:8443/api/v1/nodes
	I0229 02:42:47.132105    1532 round_trippers.go:463] GET https://172.19.2.238:8443/api/v1/nodes
	I0229 02:42:47.132105    1532 round_trippers.go:469] Request Headers:
	I0229 02:42:47.132105    1532 round_trippers.go:473]     Accept: application/json, */*
	I0229 02:42:47.132105    1532 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 02:42:47.137578    1532 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 02:42:47.137578    1532 round_trippers.go:577] Response Headers:
	I0229 02:42:47.137657    1532 round_trippers.go:580]     Audit-Id: 4e0a653e-1255-436b-bbd0-721176de08e9
	I0229 02:42:47.137657    1532 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 02:42:47.137657    1532 round_trippers.go:580]     Content-Type: application/json
	I0229 02:42:47.137657    1532 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ebcc1b5b-dfef-42fc-9730-9944d13f6ad6
	I0229 02:42:47.137739    1532 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 66f8db5d-7f02-4bed-a85c-0ff8ff518b75
	I0229 02:42:47.137739    1532 round_trippers.go:580]     Date: Thu, 29 Feb 2024 02:42:47 GMT
	I0229 02:42:47.138204    1532 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2049"},"items":[{"metadata":{"name":"multinode-314500","uid":"87be2dc3-146c-40c3-8af3-e9ee82726bf4","resourceVersion":"2004","creationTimestamp":"2024-02-29T02:15:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-314500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-314500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T02_15_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10085 chars]
	I0229 02:42:47.139146    1532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:42:47.139222    1532 node_conditions.go:123] node cpu capacity is 2
	I0229 02:42:47.139222    1532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:42:47.139222    1532 node_conditions.go:123] node cpu capacity is 2
	I0229 02:42:47.139222    1532 node_conditions.go:105] duration metric: took 149.4417ms to run NodePressure ...
	I0229 02:42:47.139222    1532 start.go:228] waiting for startup goroutines ...
	I0229 02:42:47.139298    1532 start.go:233] waiting for cluster config update ...
	I0229 02:42:47.139298    1532 start.go:242] writing updated cluster config ...
	I0229 02:42:47.154492    1532 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:42:47.154492    1532 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:42:47.158114    1532 out.go:177] * Starting worker node multinode-314500-m02 in cluster multinode-314500
	I0229 02:42:47.158515    1532 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 02:42:47.158631    1532 cache.go:56] Caching tarball of preloaded images
	I0229 02:42:47.158872    1532 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 02:42:47.159062    1532 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 02:42:47.159362    1532 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-314500\config.json ...
	I0229 02:42:47.161474    1532 start.go:365] acquiring machines lock for multinode-314500-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:42:47.161703    1532 start.go:369] acquired machines lock for "multinode-314500-m02" in 112.6µs
	I0229 02:42:47.161806    1532 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:42:47.161806    1532 fix.go:54] fixHost starting: m02
	I0229 02:42:47.162349    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:42:49.145688    1532 main.go:141] libmachine: [stdout =====>] : Off
	
	I0229 02:42:49.145688    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:42:49.145688    1532 fix.go:102] recreateIfNeeded on multinode-314500-m02: state=Stopped err=<nil>
	W0229 02:42:49.145688    1532 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:42:49.146500    1532 out.go:177] * Restarting existing hyperv VM for "multinode-314500-m02" ...
	I0229 02:42:49.147122    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-314500-m02
	I0229 02:42:51.854678    1532 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:42:51.854678    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:42:51.854772    1532 main.go:141] libmachine: Waiting for host to start...
	I0229 02:42:51.854772    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:42:53.926012    1532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:42:53.926012    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:42:53.926092    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:42:56.273041    1532 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:42:56.273041    1532 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:42:57.283768    1532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	
	
	==> Docker <==
	Feb 29 02:42:35 multinode-314500 dockerd[1022]: time="2024-02-29T02:42:35.450292943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:42:35 multinode-314500 dockerd[1022]: time="2024-02-29T02:42:35.450509095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:42:35 multinode-314500 dockerd[1022]: time="2024-02-29T02:42:35.451189147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:42:35 multinode-314500 dockerd[1022]: time="2024-02-29T02:42:35.454278372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:42:35 multinode-314500 dockerd[1022]: time="2024-02-29T02:42:35.454339159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:42:35 multinode-314500 dockerd[1022]: time="2024-02-29T02:42:35.454352456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:42:35 multinode-314500 dockerd[1022]: time="2024-02-29T02:42:35.454747170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:42:35 multinode-314500 cri-dockerd[1236]: time="2024-02-29T02:42:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/870b61ea995842f26dc4c2e6c7548e01eefad2e8907813a1b658f0bd0542b341/resolv.conf as [nameserver 172.19.0.1]"
	Feb 29 02:42:35 multinode-314500 cri-dockerd[1236]: time="2024-02-29T02:42:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3144f1457b845b90e53b26b97d36da0bf8211e7c8021d711ebbf63bf86082eeb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 29 02:42:35 multinode-314500 dockerd[1022]: time="2024-02-29T02:42:35.858641987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:42:35 multinode-314500 dockerd[1022]: time="2024-02-29T02:42:35.858856039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:42:35 multinode-314500 dockerd[1022]: time="2024-02-29T02:42:35.858895330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:42:35 multinode-314500 dockerd[1022]: time="2024-02-29T02:42:35.859489398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:42:35 multinode-314500 dockerd[1022]: time="2024-02-29T02:42:35.914158051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:42:35 multinode-314500 dockerd[1022]: time="2024-02-29T02:42:35.914305618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:42:35 multinode-314500 dockerd[1022]: time="2024-02-29T02:42:35.914321315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:42:35 multinode-314500 dockerd[1022]: time="2024-02-29T02:42:35.914818304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:42:50 multinode-314500 dockerd[1015]: time="2024-02-29T02:42:50.446831249Z" level=info msg="ignoring event" container=502b73f7930d7fce9b430c61831626145a33f204fa80ff9998ec68aac1c8a077 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 02:42:50 multinode-314500 dockerd[1022]: time="2024-02-29T02:42:50.449203584Z" level=info msg="shim disconnected" id=502b73f7930d7fce9b430c61831626145a33f204fa80ff9998ec68aac1c8a077 namespace=moby
	Feb 29 02:42:50 multinode-314500 dockerd[1022]: time="2024-02-29T02:42:50.449570443Z" level=warning msg="cleaning up after shim disconnected" id=502b73f7930d7fce9b430c61831626145a33f204fa80ff9998ec68aac1c8a077 namespace=moby
	Feb 29 02:42:50 multinode-314500 dockerd[1022]: time="2024-02-29T02:42:50.449672332Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 02:43:05 multinode-314500 dockerd[1022]: time="2024-02-29T02:43:05.530210281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 02:43:05 multinode-314500 dockerd[1022]: time="2024-02-29T02:43:05.530427872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 02:43:05 multinode-314500 dockerd[1022]: time="2024-02-29T02:43:05.530465770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 02:43:05 multinode-314500 dockerd[1022]: time="2024-02-29T02:43:05.530595765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d1567420460c8       6e38f40d628db       15 seconds ago       Running             storage-provisioner       4                   56bb6ae74feba       storage-provisioner
	dc39865b78626       8c811b4aec35f       45 seconds ago       Running             busybox                   2                   3144f1457b845       busybox-5b5d89c9d6-qcblm
	4528b2deb4fa0       ead0a4a53df89       45 seconds ago       Running             coredns                   2                   870b61ea99584       coredns-5dd5756b68-8g6tg
	e23a2a2e18aab       4950bb10b3f87       About a minute ago   Running             kindnet-cni               2                   25ee94b28f883       kindnet-t9r77
	502b73f7930d7       6e38f40d628db       About a minute ago   Exited              storage-provisioner       3                   56bb6ae74feba       storage-provisioner
	b7b87531ce6d4       83f6cc407eed8       About a minute ago   Running             kube-proxy                2                   c02d2b0969d44       kube-proxy-6r6j4
	7fa73788c8843       d058aa5ab969c       About a minute ago   Running             kube-controller-manager   2                   4bfa92cd9a0aa       kube-controller-manager-multinode-314500
	15f479e00cb1a       e3db313c6dbc0       About a minute ago   Running             kube-scheduler            2                   c5b421326eea7       kube-scheduler-multinode-314500
	058b01cc9c824       73deb9a3f7025       About a minute ago   Running             etcd                      0                   735cfa3e2813e       etcd-multinode-314500
	d8dd429a7c8d1       7fe0e6f37db33       About a minute ago   Running             kube-apiserver            0                   b905d5a3791f5       kube-apiserver-multinode-314500
	745f9e18fc6ab       8c811b4aec35f       10 minutes ago       Exited              busybox                   1                   509440c9783b9       busybox-5b5d89c9d6-qcblm
	5814ae38cea0e       ead0a4a53df89       10 minutes ago       Exited              coredns                   1                   e767eb4735017       coredns-5dd5756b68-8g6tg
	1993ffe76ae7f       4950bb10b3f87       10 minutes ago       Exited              kindnet-cni               1                   349bdaee8eb96       kindnet-t9r77
	341278d602ddd       83f6cc407eed8       10 minutes ago       Exited              kube-proxy                1                   02fbddb29c60a       kube-proxy-6r6j4
	f1cb36bcb3f3d       d058aa5ab969c       10 minutes ago       Exited              kube-controller-manager   1                   340bdcfacbe25       kube-controller-manager-multinode-314500
	41745010357fe       e3db313c6dbc0       10 minutes ago       Exited              kube-scheduler            1                   007d6c9a53e16       kube-scheduler-multinode-314500
	
	
	==> coredns [4528b2deb4fa] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c43704ee218c3500d97c54254d76c1d56cc0443961fea557ef898f1da8154a1212605c10203ede1e288070d97e67d107ee3d60ae9c1e40b060414629f7811dd
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47000 - 61378 "HINFO IN 4705092909102924593.7235860332672878307. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.318122333s
	
	
	==> coredns [5814ae38cea0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c43704ee218c3500d97c54254d76c1d56cc0443961fea557ef898f1da8154a1212605c10203ede1e288070d97e67d107ee3d60ae9c1e40b060414629f7811dd
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:32846 - 7493 "HINFO IN 6477765139827559342.7079461035665089981. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.126040178s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-314500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-314500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=multinode-314500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T02_15_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:15:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-314500
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:43:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:42:29 +0000   Thu, 29 Feb 2024 02:15:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:42:29 +0000   Thu, 29 Feb 2024 02:15:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:42:29 +0000   Thu, 29 Feb 2024 02:15:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:42:29 +0000   Thu, 29 Feb 2024 02:42:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.2.238
	  Hostname:    multinode-314500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a4f355787314726989e6920d7da282f
	  System UUID:                d0919ea2-7b7b-e246-9348-925d639776b8
	  Boot ID:                    76417ef9-4559-4132-ab88-46c7d0d49df1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-qcblm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-5dd5756b68-8g6tg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-314500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         61s
	  kube-system                 kindnet-t9r77                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-314500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-multinode-314500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-6r6j4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-314500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  Starting                 59s                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)  kubelet          Node multinode-314500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)  kubelet          Node multinode-314500 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)  kubelet          Node multinode-314500 status is now: NodeHasSufficientMemory
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node multinode-314500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node multinode-314500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                kubelet          Node multinode-314500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27m                node-controller  Node multinode-314500 event: Registered Node multinode-314500 in Controller
	  Normal  NodeReady                27m                kubelet          Node multinode-314500 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node multinode-314500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node multinode-314500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node multinode-314500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node multinode-314500 event: Registered Node multinode-314500 in Controller
	  Normal  Starting                 66s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  66s (x8 over 66s)  kubelet          Node multinode-314500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    66s (x8 over 66s)  kubelet          Node multinode-314500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     66s (x7 over 66s)  kubelet          Node multinode-314500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  66s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           49s                node-controller  Node multinode-314500 event: Registered Node multinode-314500 in Controller
	
	
	Name:               multinode-314500-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-314500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=multinode-314500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_02_29T02_37_29_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:35:28 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-314500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:39:23 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 29 Feb 2024 02:35:33 +0000   Thu, 29 Feb 2024 02:43:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 29 Feb 2024 02:35:33 +0000   Thu, 29 Feb 2024 02:43:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 29 Feb 2024 02:35:33 +0000   Thu, 29 Feb 2024 02:43:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 29 Feb 2024 02:35:33 +0000   Thu, 29 Feb 2024 02:43:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.19.4.42
	  Hostname:    multinode-314500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 e610aac6e4be42609bc76ef694a8facf
	  System UUID:                b1627b4d-7d75-ed47-9ee8-e9d67e74df72
	  Boot ID:                    a1e79ebd-9754-4a2d-a740-898f5164b060
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-vh2zk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 kindnet-6r7b8               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-4gbrl            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 24m                    kube-proxy       
	  Normal  Starting                 7m49s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x5 over 24m)      kubelet          Node multinode-314500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x5 over 24m)      kubelet          Node multinode-314500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x5 over 24m)      kubelet          Node multinode-314500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                24m                    kubelet          Node multinode-314500-m02 status is now: NodeReady
	  Normal  Starting                 7m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m52s (x2 over 7m52s)  kubelet          Node multinode-314500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m52s (x2 over 7m52s)  kubelet          Node multinode-314500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m52s (x2 over 7m52s)  kubelet          Node multinode-314500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m47s                  kubelet          Node multinode-314500-m02 status is now: NodeReady
	  Normal  RegisteredNode           49s                    node-controller  Node multinode-314500-m02 event: Registered Node multinode-314500-m02 in Controller
	  Normal  NodeNotReady             9s                     node-controller  Node multinode-314500-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.061007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.025901] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	              * this clock source is slow. Consider trying other clock sources
	[  +5.960603] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.364589] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.258475] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	[  +7.602635] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Feb29 02:41] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.182109] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[Feb29 02:42] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.115685] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.503940] systemd-fstab-generator[981]: Ignoring "noauto" option for root device
	[  +0.213471] systemd-fstab-generator[993]: Ignoring "noauto" option for root device
	[  +0.238654] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[  +1.934297] systemd-fstab-generator[1189]: Ignoring "noauto" option for root device
	[  +0.198262] systemd-fstab-generator[1201]: Ignoring "noauto" option for root device
	[  +0.214191] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.276769] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[  +3.865166] systemd-fstab-generator[1449]: Ignoring "noauto" option for root device
	[  +0.111114] kauditd_printk_skb: 205 callbacks suppressed
	[  +6.020012] kauditd_printk_skb: 62 callbacks suppressed
	[ +12.109613] kauditd_printk_skb: 48 callbacks suppressed
	[  +8.479621] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [058b01cc9c82] <==
	{"level":"info","ts":"2024-02-29T02:42:15.44232Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T02:42:15.442502Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.2.238:2380"}
	{"level":"info","ts":"2024-02-29T02:42:15.442538Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.2.238:2380"}
	{"level":"info","ts":"2024-02-29T02:42:15.442517Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T02:42:15.443804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 switched to configuration voters=(2921898997477636162)"}
	{"level":"info","ts":"2024-02-29T02:42:15.444067Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b70ab9772a44d22c","local-member-id":"288caba846397842","added-peer-id":"288caba846397842","added-peer-peer-urls":["https://172.19.2.165:2380"]}
	{"level":"info","ts":"2024-02-29T02:42:15.442471Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-29T02:42:15.444307Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b70ab9772a44d22c","local-member-id":"288caba846397842","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:42:15.444837Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:42:15.444724Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"288caba846397842","initial-advertise-peer-urls":["https://172.19.2.238:2380"],"listen-peer-urls":["https://172.19.2.238:2380"],"advertise-client-urls":["https://172.19.2.238:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.2.238:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T02:42:15.444754Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T02:42:16.41519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 is starting a new election at term 3"}
	{"level":"info","ts":"2024-02-29T02:42:16.415362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-02-29T02:42:16.415386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 received MsgPreVoteResp from 288caba846397842 at term 3"}
	{"level":"info","ts":"2024-02-29T02:42:16.415432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 became candidate at term 4"}
	{"level":"info","ts":"2024-02-29T02:42:16.415466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 received MsgVoteResp from 288caba846397842 at term 4"}
	{"level":"info","ts":"2024-02-29T02:42:16.415477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"288caba846397842 became leader at term 4"}
	{"level":"info","ts":"2024-02-29T02:42:16.415487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 288caba846397842 elected leader 288caba846397842 at term 4"}
	{"level":"info","ts":"2024-02-29T02:42:16.42092Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"288caba846397842","local-member-attributes":"{Name:multinode-314500 ClientURLs:[https://172.19.2.238:2379]}","request-path":"/0/members/288caba846397842/attributes","cluster-id":"b70ab9772a44d22c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T02:42:16.421097Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:42:16.422789Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T02:42:16.422861Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:42:16.423328Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T02:42:16.423949Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T02:42:16.441183Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.2.238:2379"}
	
	
	==> kernel <==
	 02:43:20 up 2 min,  0 users,  load average: 0.34, 0.19, 0.07
	Linux multinode-314500 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1993ffe76ae7] <==
	I0229 02:38:15.360997       1 main.go:250] Node multinode-314500-m03 has CIDR [10.244.2.0/24] 
	I0229 02:38:25.368627       1 main.go:223] Handling node with IPs: map[172.19.2.252:{}]
	I0229 02:38:25.368732       1 main.go:227] handling current node
	I0229 02:38:25.368745       1 main.go:223] Handling node with IPs: map[172.19.4.42:{}]
	I0229 02:38:25.368753       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:38:35.381486       1 main.go:223] Handling node with IPs: map[172.19.2.252:{}]
	I0229 02:38:35.381599       1 main.go:227] handling current node
	I0229 02:38:35.381615       1 main.go:223] Handling node with IPs: map[172.19.4.42:{}]
	I0229 02:38:35.381624       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:38:45.388958       1 main.go:223] Handling node with IPs: map[172.19.2.252:{}]
	I0229 02:38:45.389396       1 main.go:227] handling current node
	I0229 02:38:45.389575       1 main.go:223] Handling node with IPs: map[172.19.4.42:{}]
	I0229 02:38:45.389697       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:38:55.395827       1 main.go:223] Handling node with IPs: map[172.19.2.252:{}]
	I0229 02:38:55.395904       1 main.go:227] handling current node
	I0229 02:38:55.395916       1 main.go:223] Handling node with IPs: map[172.19.4.42:{}]
	I0229 02:38:55.395923       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:39:05.402030       1 main.go:223] Handling node with IPs: map[172.19.2.252:{}]
	I0229 02:39:05.402067       1 main.go:227] handling current node
	I0229 02:39:05.402078       1 main.go:223] Handling node with IPs: map[172.19.4.42:{}]
	I0229 02:39:05.402084       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:39:15.416189       1 main.go:223] Handling node with IPs: map[172.19.2.252:{}]
	I0229 02:39:15.416292       1 main.go:227] handling current node
	I0229 02:39:15.416307       1 main.go:223] Handling node with IPs: map[172.19.4.42:{}]
	I0229 02:39:15.416315       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [e23a2a2e18aa] <==
	I0229 02:42:21.480801       1 main.go:223] Handling node with IPs: map[172.19.2.238:{}]
	I0229 02:42:21.485471       1 main.go:227] handling current node
	I0229 02:42:21.485977       1 main.go:223] Handling node with IPs: map[172.19.4.42:{}]
	I0229 02:42:21.491466       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:42:21.491831       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.19.4.42 Flags: [] Table: 0} 
	I0229 02:42:31.513091       1 main.go:223] Handling node with IPs: map[172.19.2.238:{}]
	I0229 02:42:31.513216       1 main.go:227] handling current node
	I0229 02:42:31.513231       1 main.go:223] Handling node with IPs: map[172.19.4.42:{}]
	I0229 02:42:31.513239       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:42:41.523508       1 main.go:223] Handling node with IPs: map[172.19.2.238:{}]
	I0229 02:42:41.523606       1 main.go:227] handling current node
	I0229 02:42:41.523621       1 main.go:223] Handling node with IPs: map[172.19.4.42:{}]
	I0229 02:42:41.523628       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:42:51.531173       1 main.go:223] Handling node with IPs: map[172.19.2.238:{}]
	I0229 02:42:51.531274       1 main.go:227] handling current node
	I0229 02:42:51.531287       1 main.go:223] Handling node with IPs: map[172.19.4.42:{}]
	I0229 02:42:51.531295       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:43:01.543241       1 main.go:223] Handling node with IPs: map[172.19.2.238:{}]
	I0229 02:43:01.543351       1 main.go:227] handling current node
	I0229 02:43:01.543365       1 main.go:223] Handling node with IPs: map[172.19.4.42:{}]
	I0229 02:43:01.543373       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	I0229 02:43:11.552550       1 main.go:223] Handling node with IPs: map[172.19.2.238:{}]
	I0229 02:43:11.553317       1 main.go:227] handling current node
	I0229 02:43:11.553582       1 main.go:223] Handling node with IPs: map[172.19.4.42:{}]
	I0229 02:43:11.553612       1 main.go:250] Node multinode-314500-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [d8dd429a7c8d] <==
	I0229 02:42:18.944257       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0229 02:42:18.944276       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0229 02:42:19.109748       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 02:42:19.109825       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 02:42:19.118945       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 02:42:19.140663       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 02:42:19.147084       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 02:42:19.167605       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0229 02:42:19.167632       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0229 02:42:19.174596       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 02:42:19.180348       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0229 02:42:19.180786       1 aggregator.go:166] initial CRD sync complete...
	I0229 02:42:19.180944       1 autoregister_controller.go:141] Starting autoregister controller
	I0229 02:42:19.181032       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0229 02:42:19.181137       1 cache.go:39] Caches are synced for autoregister controller
	I0229 02:42:19.873444       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0229 02:42:20.372352       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.19.2.238 172.19.2.252]
	I0229 02:42:20.374698       1 controller.go:624] quota admission added evaluator for: endpoints
	I0229 02:42:20.388170       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0229 02:42:21.888949       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0229 02:42:22.060151       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0229 02:42:22.074379       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0229 02:42:22.159019       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0229 02:42:22.170249       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0229 02:42:40.380951       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.19.2.238]
	
	
	==> kube-controller-manager [7fa73788c884] <==
	I0229 02:42:31.927660       1 shared_informer.go:318] Caches are synced for PVC protection
	I0229 02:42:31.934683       1 shared_informer.go:318] Caches are synced for ephemeral
	I0229 02:42:31.946303       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 02:42:31.953276       1 shared_informer.go:318] Caches are synced for attach detach
	I0229 02:42:31.986100       1 shared_informer.go:318] Caches are synced for stateful set
	I0229 02:42:31.988150       1 shared_informer.go:318] Caches are synced for expand
	I0229 02:42:31.988444       1 shared_informer.go:318] Caches are synced for persistent volume
	I0229 02:42:32.104012       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="224.88958ms"
	I0229 02:42:32.105109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="52.985µs"
	I0229 02:42:32.106941       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="225.270734ms"
	I0229 02:42:32.108125       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.38µs"
	I0229 02:42:32.351850       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 02:42:32.438590       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 02:42:32.438950       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0229 02:42:36.818578       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="204.857µs"
	I0229 02:42:36.860906       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="8.905611ms"
	I0229 02:42:36.861541       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="140.97µs"
	I0229 02:42:45.291603       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.566725ms"
	I0229 02:42:45.291701       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.893µs"
	I0229 02:43:11.948739       1 event.go:307] "Event occurred" object="multinode-314500-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-314500-m02 status is now: NodeNotReady"
	I0229 02:43:11.964137       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-vh2zk" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0229 02:43:11.976611       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.271459ms"
	I0229 02:43:11.977345       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="114.298µs"
	I0229 02:43:11.983579       1 event.go:307] "Event occurred" object="kube-system/kindnet-6r7b8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0229 02:43:12.005566       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-4gbrl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-controller-manager [f1cb36bcb3f3] <==
	I0229 02:35:28.332987       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-314500-m02\" does not exist"
	I0229 02:35:28.335731       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-826w2" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-826w2"
	I0229 02:35:28.344556       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-314500-m02" podCIDRs=["10.244.1.0/24"]
	I0229 02:35:29.195303       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="76.883µs"
	I0229 02:35:33.466693       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-314500-m02"
	I0229 02:35:33.491809       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="70.686µs"
	I0229 02:35:35.604478       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-826w2" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-826w2"
	I0229 02:35:43.315234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="135.372µs"
	I0229 02:35:43.326027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="244.15µs"
	I0229 02:35:43.342657       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="74.085µs"
	I0229 02:35:43.599445       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="77.784µs"
	I0229 02:35:43.602538       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="53.089µs"
	I0229 02:35:44.636172       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.402192ms"
	I0229 02:35:44.637027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="366.726µs"
	I0229 02:37:26.601733       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-314500-m02"
	I0229 02:37:27.923902       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-314500-m02"
	I0229 02:37:27.926815       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-314500-m03\" does not exist"
	I0229 02:37:27.944211       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-314500-m03" podCIDRs=["10.244.2.0/24"]
	I0229 02:37:36.144739       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-314500-m02"
	I0229 02:38:23.526183       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-314500-m02"
	I0229 02:38:25.659701       1 event.go:307] "Event occurred" object="multinode-314500-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-314500-m03 event: Removing Node multinode-314500-m03 from Controller"
	I0229 02:39:15.479189       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-7g9t8"
	I0229 02:39:15.512192       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-7g9t8"
	I0229 02:39:15.512242       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-zvlt2"
	I0229 02:39:15.541467       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-zvlt2"
	
	
	==> kube-proxy [341278d602dd] <==
	I0229 02:33:04.215978       1 server_others.go:69] "Using iptables proxy"
	I0229 02:33:04.251984       1 node.go:141] Successfully retrieved node IP: 172.19.2.252
	I0229 02:33:04.360615       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 02:33:04.360657       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 02:33:04.365625       1 server_others.go:152] "Using iptables Proxier"
	I0229 02:33:04.368633       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 02:33:04.369106       1 server.go:846] "Version info" version="v1.28.4"
	I0229 02:33:04.369119       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:33:04.372544       1 config.go:188] "Starting service config controller"
	I0229 02:33:04.374189       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 02:33:04.374236       1 config.go:315] "Starting node config controller"
	I0229 02:33:04.374243       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 02:33:04.381822       1 config.go:97] "Starting endpoint slice config controller"
	I0229 02:33:04.381894       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 02:33:04.475033       1 shared_informer.go:318] Caches are synced for service config
	I0229 02:33:04.475731       1 shared_informer.go:318] Caches are synced for node config
	I0229 02:33:04.482714       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [b7b87531ce6d] <==
	I0229 02:42:20.548480       1 server_others.go:69] "Using iptables proxy"
	I0229 02:42:20.596292       1 node.go:141] Successfully retrieved node IP: 172.19.2.238
	I0229 02:42:20.690709       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 02:42:20.690834       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 02:42:20.697336       1 server_others.go:152] "Using iptables Proxier"
	I0229 02:42:20.698896       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 02:42:20.700676       1 server.go:846] "Version info" version="v1.28.4"
	I0229 02:42:20.700706       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:42:20.703782       1 config.go:188] "Starting service config controller"
	I0229 02:42:20.705205       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 02:42:20.705292       1 config.go:97] "Starting endpoint slice config controller"
	I0229 02:42:20.705304       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 02:42:20.707817       1 config.go:315] "Starting node config controller"
	I0229 02:42:20.708058       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 02:42:20.805509       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 02:42:20.805567       1 shared_informer.go:318] Caches are synced for service config
	I0229 02:42:20.808326       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [15f479e00cb1] <==
	I0229 02:42:18.133334       1 serving.go:348] Generated self-signed cert in-memory
	I0229 02:42:19.436283       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0229 02:42:19.436322       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:42:19.450455       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 02:42:19.451142       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0229 02:42:19.451170       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0229 02:42:19.452126       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 02:42:19.453047       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 02:42:19.453080       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 02:42:19.453099       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0229 02:42:19.453104       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0229 02:42:19.551713       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0229 02:42:19.553238       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0229 02:42:19.553280       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [41745010357f] <==
	I0229 02:32:59.773752       1 serving.go:348] Generated self-signed cert in-memory
	W0229 02:33:02.542906       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0229 02:33:02.543166       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 02:33:02.543526       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 02:33:02.543686       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 02:33:02.659015       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0229 02:33:02.659400       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:33:02.665902       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 02:33:02.666208       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 02:33:02.666489       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 02:33:02.667821       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 02:33:02.768883       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 02:39:24.639519       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0229 02:39:24.639561       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0229 02:39:24.643331       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 29 02:42:24 multinode-314500 kubelet[1456]: E0229 02:42:24.444843    1456 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Feb 29 02:42:25 multinode-314500 kubelet[1456]: E0229 02:42:25.354225    1456 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-8g6tg" podUID="ef7fb259-9f24-4645-9eff-2b16f6789e1b"
	Feb 29 02:42:25 multinode-314500 kubelet[1456]: E0229 02:42:25.354521    1456 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-qcblm" podUID="97a45dff-5653-45e8-9aac-76dbca48c759"
	Feb 29 02:42:26 multinode-314500 kubelet[1456]: E0229 02:42:26.913863    1456 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 29 02:42:26 multinode-314500 kubelet[1456]: E0229 02:42:26.914031    1456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ef7fb259-9f24-4645-9eff-2b16f6789e1b-config-volume podName:ef7fb259-9f24-4645-9eff-2b16f6789e1b nodeName:}" failed. No retries permitted until 2024-02-29 02:42:34.914012895 +0000 UTC m=+21.039286199 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ef7fb259-9f24-4645-9eff-2b16f6789e1b-config-volume") pod "coredns-5dd5756b68-8g6tg" (UID: "ef7fb259-9f24-4645-9eff-2b16f6789e1b") : object "kube-system"/"coredns" not registered
	Feb 29 02:42:27 multinode-314500 kubelet[1456]: E0229 02:42:27.015072    1456 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Feb 29 02:42:27 multinode-314500 kubelet[1456]: E0229 02:42:27.015223    1456 projected.go:198] Error preparing data for projected volume kube-api-access-4fv6k for pod default/busybox-5b5d89c9d6-qcblm: object "default"/"kube-root-ca.crt" not registered
	Feb 29 02:42:27 multinode-314500 kubelet[1456]: E0229 02:42:27.015313    1456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97a45dff-5653-45e8-9aac-76dbca48c759-kube-api-access-4fv6k podName:97a45dff-5653-45e8-9aac-76dbca48c759 nodeName:}" failed. No retries permitted until 2024-02-29 02:42:35.015295871 +0000 UTC m=+21.140569275 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-4fv6k" (UniqueName: "kubernetes.io/projected/97a45dff-5653-45e8-9aac-76dbca48c759-kube-api-access-4fv6k") pod "busybox-5b5d89c9d6-qcblm" (UID: "97a45dff-5653-45e8-9aac-76dbca48c759") : object "default"/"kube-root-ca.crt" not registered
	Feb 29 02:42:27 multinode-314500 kubelet[1456]: E0229 02:42:27.354868    1456 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-8g6tg" podUID="ef7fb259-9f24-4645-9eff-2b16f6789e1b"
	Feb 29 02:42:27 multinode-314500 kubelet[1456]: E0229 02:42:27.355676    1456 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-qcblm" podUID="97a45dff-5653-45e8-9aac-76dbca48c759"
	Feb 29 02:42:29 multinode-314500 kubelet[1456]: E0229 02:42:29.354499    1456 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-qcblm" podUID="97a45dff-5653-45e8-9aac-76dbca48c759"
	Feb 29 02:42:29 multinode-314500 kubelet[1456]: E0229 02:42:29.354700    1456 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-8g6tg" podUID="ef7fb259-9f24-4645-9eff-2b16f6789e1b"
	Feb 29 02:42:35 multinode-314500 kubelet[1456]: I0229 02:42:35.674097    1456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870b61ea995842f26dc4c2e6c7548e01eefad2e8907813a1b658f0bd0542b341"
	Feb 29 02:42:35 multinode-314500 kubelet[1456]: I0229 02:42:35.757623    1456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3144f1457b845b90e53b26b97d36da0bf8211e7c8021d711ebbf63bf86082eeb"
	Feb 29 02:42:51 multinode-314500 kubelet[1456]: I0229 02:42:51.031797    1456 scope.go:117] "RemoveContainer" containerID="72b2d832587c88217606efd3aafe034b3446970ed2a0fa9b849ba309f91c0154"
	Feb 29 02:42:51 multinode-314500 kubelet[1456]: I0229 02:42:51.032126    1456 scope.go:117] "RemoveContainer" containerID="502b73f7930d7fce9b430c61831626145a33f204fa80ff9998ec68aac1c8a077"
	Feb 29 02:42:51 multinode-314500 kubelet[1456]: E0229 02:42:51.033037    1456 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9780520b-8ff9-408a-ab6f-41b63790ccd1)\"" pod="kube-system/storage-provisioner" podUID="9780520b-8ff9-408a-ab6f-41b63790ccd1"
	Feb 29 02:43:05 multinode-314500 kubelet[1456]: I0229 02:43:05.354105    1456 scope.go:117] "RemoveContainer" containerID="502b73f7930d7fce9b430c61831626145a33f204fa80ff9998ec68aac1c8a077"
	Feb 29 02:43:14 multinode-314500 kubelet[1456]: E0229 02:43:14.409887    1456 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:43:14 multinode-314500 kubelet[1456]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:43:14 multinode-314500 kubelet[1456]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:43:14 multinode-314500 kubelet[1456]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:43:14 multinode-314500 kubelet[1456]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:43:14 multinode-314500 kubelet[1456]: I0229 02:43:14.747777    1456 scope.go:117] "RemoveContainer" containerID="ada445c976af30c7560cd9ec6018d9d89535db860f96d5c1bf37ab1a865dbde2"
	Feb 29 02:43:14 multinode-314500 kubelet[1456]: I0229 02:43:14.779638    1456 scope.go:117] "RemoveContainer" containerID="795e8c6845079d0e72c0146f9cef3420858d1c834962679bae1279dbd8b9e453"
	

-- /stdout --
** stderr ** 
	W0229 02:43:12.708658    9952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-314500 -n multinode-314500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-314500 -n multinode-314500: (11.2579314s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-314500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (190.53s)

TestPreload (274.56s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-103800 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p test-preload-103800 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: exit status 90 (3m20.0636699s)

-- stdout --
	* [test-preload-103800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node test-preload-103800 in cluster test-preload-103800
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	W0229 02:45:06.812160    9388 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 02:45:06.863528    9388 out.go:291] Setting OutFile to fd 1180 ...
	I0229 02:45:06.863528    9388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:45:06.863528    9388 out.go:304] Setting ErrFile to fd 1552...
	I0229 02:45:06.864549    9388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:45:06.883905    9388 out.go:298] Setting JSON to false
	I0229 02:45:06.887723    9388 start.go:129] hostinfo: {"hostname":"minikube5","uptime":270933,"bootTime":1708903773,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 02:45:06.887723    9388 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 02:45:06.888924    9388 out.go:177] * [test-preload-103800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 02:45:06.889599    9388 notify.go:220] Checking for updates...
	I0229 02:45:06.890190    9388 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 02:45:06.890688    9388 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:45:06.891644    9388 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 02:45:06.892143    9388 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:45:06.892722    9388 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:45:06.893819    9388 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:45:11.957682    9388 out.go:177] * Using the hyperv driver based on user configuration
	I0229 02:45:11.958314    9388 start.go:299] selected driver: hyperv
	I0229 02:45:11.958314    9388 start.go:903] validating driver "hyperv" against <nil>
	I0229 02:45:11.958413    9388 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:45:12.005378    9388 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 02:45:12.006612    9388 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:45:12.006612    9388 cni.go:84] Creating CNI manager for ""
	I0229 02:45:12.006612    9388 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 02:45:12.006612    9388 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 02:45:12.007016    9388 start_flags.go:323] config:
	{Name:test-preload-103800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-103800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:45:12.007275    9388 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:45:12.008736    9388 out.go:177] * Starting control plane node test-preload-103800 in cluster test-preload-103800
	I0229 02:45:12.009066    9388 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0229 02:45:12.009502    9388 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0229 02:45:12.009502    9388 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.24.4 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.24.4
	I0229 02:45:12.009502    9388 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.24.4 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.24.4
	I0229 02:45:12.009502    9388 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.5.3-0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.3-0
	I0229 02:45:12.009502    9388 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.24.4 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.24.4
	I0229 02:45:12.009502    9388 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.24.4 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.24.4
	I0229 02:45:12.009870    9388 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\test-preload-103800\config.json ...
	I0229 02:45:12.009502    9388 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.8.6 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.8.6
	I0229 02:45:12.009502    9388 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.7 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.7
	I0229 02:45:12.009959    9388 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\test-preload-103800\config.json: {Name:mkb422ae239409086a3dd6eeb10d709bcc937ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:45:12.011197    9388 start.go:365] acquiring machines lock for test-preload-103800: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:45:12.011603    9388 start.go:369] acquired machines lock for "test-preload-103800" in 406.8µs
	I0229 02:45:12.011858    9388 start.go:93] Provisioning new machine with config: &{Name:test-preload-103800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-103800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 02:45:12.012113    9388 start.go:125] createHost starting for "" (driver="hyperv")
	I0229 02:45:12.012820    9388 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 02:45:12.013413    9388 start.go:159] libmachine.API.Create for "test-preload-103800" (driver="hyperv")
	I0229 02:45:12.013864    9388 client.go:168] LocalClient.Create starting
	I0229 02:45:12.013864    9388 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0229 02:45:12.014477    9388 main.go:141] libmachine: Decoding PEM data...
	I0229 02:45:12.014477    9388 main.go:141] libmachine: Parsing certificate...
	I0229 02:45:12.014477    9388 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0229 02:45:12.015103    9388 main.go:141] libmachine: Decoding PEM data...
	I0229 02:45:12.015103    9388 main.go:141] libmachine: Parsing certificate...
	I0229 02:45:12.015103    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0229 02:45:12.169744    9388 cache.go:107] acquiring lock: {Name:mkb44294673347787e75a41439d5f76c37c66ee8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:45:12.169744    9388 cache.go:107] acquiring lock: {Name:mk988f30165cfa9745c00011966aa95fb5bc996c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:45:12.170385    9388 cache.go:107] acquiring lock: {Name:mk757d611ae17564ab6b4dcc7ae88b7962a664c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:45:12.171171    9388 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0229 02:45:12.171171    9388 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0229 02:45:12.171171    9388 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0229 02:45:12.172579    9388 cache.go:107] acquiring lock: {Name:mke8bd9ef2aaa63c3aa92185910efe1ef770c1ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:45:12.172579    9388 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0229 02:45:12.173584    9388 cache.go:107] acquiring lock: {Name:mk947ed43b2ff54c3b9e52b3e879fe0ac6d5deda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:45:12.173721    9388 cache.go:107] acquiring lock: {Name:mk8ec659182956df1e90c660c324c5574eeb1cee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:45:12.173690    9388 cache.go:107] acquiring lock: {Name:mkf591c6d3e487f0de3d0958b14b079ef11d3a1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:45:12.173974    9388 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0229 02:45:12.173974    9388 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0229 02:45:12.174567    9388 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:45:12.174637    9388 cache.go:107] acquiring lock: {Name:mk24427f98f28a8def2c1041b59b1ceaf9066489 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:45:12.174720    9388 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0229 02:45:12.191871    9388 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0229 02:45:12.191871    9388 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0229 02:45:12.191871    9388 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0229 02:45:12.191871    9388 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0229 02:45:12.191871    9388 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:45:12.192884    9388 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0229 02:45:12.198862    9388 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0229 02:45:12.203990    9388 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	W0229 02:45:12.311687    9388 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.24.4 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 02:45:12.389450    9388 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.24.4 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 02:45:12.469566    9388 image.go:187] authn lookup for registry.k8s.io/etcd:3.5.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 02:45:12.575444    9388 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.24.4 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 02:45:12.667162    9388 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 02:45:12.740412    9388 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.24.4
	W0229 02:45:12.744200    9388 image.go:187] authn lookup for registry.k8s.io/coredns/coredns:v1.8.6 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 02:45:12.748130    9388 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.24.4
	I0229 02:45:12.788222    9388 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.3-0
	I0229 02:45:12.815345    9388 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.24.4
	W0229 02:45:12.823202    9388 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.24.4 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 02:45:12.900443    9388 image.go:187] authn lookup for registry.k8s.io/pause:3.7 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 02:45:12.971930    9388 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.8.6
	I0229 02:45:13.047955    9388 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0229 02:45:13.060150    9388 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.24.4
	I0229 02:45:13.135069    9388 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.7
	I0229 02:45:13.257755    9388 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0229 02:45:13.257755    9388 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 1.2481832s
	I0229 02:45:13.258772    9388 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0229 02:45:13.335698    9388 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.7 exists
	I0229 02:45:13.335966    9388 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.7" took 1.3259329s
	I0229 02:45:13.336068    9388 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.7 succeeded
	I0229 02:45:13.950805    9388 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.24.4 exists
	I0229 02:45:13.950805    9388 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.24.4" took 1.9408271s
	I0229 02:45:13.950805    9388 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.24.4 succeeded
	I0229 02:45:14.072987    9388 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.24.4 exists
	I0229 02:45:14.072987    9388 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.24.4" took 2.0633694s
	I0229 02:45:14.073437    9388 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.24.4 succeeded
	I0229 02:45:14.154380    9388 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0229 02:45:14.154380    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:45:14.154531    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0229 02:45:14.502393    9388 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.8.6 exists
	I0229 02:45:14.502393    9388 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.8.6" took 2.4922946s
	I0229 02:45:14.502393    9388 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.8.6 succeeded
	I0229 02:45:14.683688    9388 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.24.4 exists
	I0229 02:45:14.683688    9388 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.24.4" took 2.6740365s
	I0229 02:45:14.683688    9388 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.24.4 succeeded
	I0229 02:45:15.293960    9388 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.3-0 exists
	I0229 02:45:15.293960    9388 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.5.3-0" took 3.2842743s
	I0229 02:45:15.293960    9388 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.3-0 succeeded
	I0229 02:45:15.500701    9388 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.24.4 exists
	I0229 02:45:15.500701    9388 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.24.4" took 3.4906359s
	I0229 02:45:15.500701    9388 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.24.4 succeeded
	I0229 02:45:15.500701    9388 cache.go:87] Successfully saved all images to host disk.
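The cache lines above each report a per-image save duration ("took 2.0633694s" and so on). As an aside, those figures can be pulled out of a log like this with a small parser; the log text and the regex below are an illustrative sketch, not minikube's own reporting code.

```python
import re

# Illustrative copies of the "took <duration>s" cache lines logged above,
# reduced to the fields the parser needs.
LOG = """\
cache image "registry.k8s.io/kube-apiserver:v1.24.4" took 2.0633694s
cache image "registry.k8s.io/coredns/coredns:v1.8.6" took 2.4922946s
cache image "registry.k8s.io/etcd:3.5.3-0" took 3.2842743s
cache image "registry.k8s.io/kube-proxy:v1.24.4" took 3.4906359s
"""

def slowest(log: str):
    """Return (image, seconds) for the slowest cache save in the log."""
    pat = re.compile(r'cache image "([^"]+)" took ([0-9.]+)s')
    timings = {m.group(1): float(m.group(2)) for m in pat.finditer(log)}
    image = max(timings, key=timings.get)
    return image, timings[image]

print(slowest(LOG))  # kube-proxy is the slowest save in this run
```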
	I0229 02:45:15.905273    9388 main.go:141] libmachine: [stdout =====>] : False
	
	I0229 02:45:15.905273    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:45:15.905273    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 02:45:17.313945    9388 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 02:45:17.313945    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:45:17.313945    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 02:45:20.796266    9388 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 02:45:20.797196    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:45:20.799225    9388 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 02:45:21.182986    9388 main.go:141] libmachine: Creating SSH key...
	I0229 02:45:21.373532    9388 main.go:141] libmachine: Creating VM...
	I0229 02:45:21.373532    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 02:45:24.049795    9388 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 02:45:24.050034    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:45:24.050133    9388 main.go:141] libmachine: Using switch "Default Switch"
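The `Get-VMSwitch` pipeline above filters for External switches or the well-known Default Switch GUID, then sorts by `SwitchType` before the driver picks one. A minimal sketch of that selection over the JSON payload printed in the log (the Python filter and the preference order are assumptions mirrored from the PowerShell, not minikube's actual Go code):

```python
import json

# GUID of the built-in Hyper-V Default Switch, as seen in the log output.
DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

# The JSON array ConvertTo-Json printed above.
payload = """[
    {
        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
        "Name":  "Default Switch",
        "SwitchType":  1
    }
]"""

def pick_switch(raw: str):
    # Keep External switches (SwitchType 2 in the Hyper-V enum) or the
    # Default Switch by Id, matching the Where-Object filter above.
    switches = [s for s in json.loads(raw)
                if s["SwitchType"] == 2 or s["Id"] == DEFAULT_SWITCH_ID]
    # Mirror the ascending Sort-Object -Property SwitchType.
    switches.sort(key=lambda s: s["SwitchType"])
    return switches[0]["Name"] if switches else None

print(pick_switch(payload))
```

With only the Default Switch present, as in this run, the selection is trivially "Default Switch", which matches the `Using switch "Default Switch"` line above.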
	I0229 02:45:24.050238    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 02:45:25.762159    9388 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 02:45:25.762159    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:45:25.762244    9388 main.go:141] libmachine: Creating VHD
	I0229 02:45:25.762244    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\test-preload-103800\fixed.vhd' -SizeBytes 10MB -Fixed
	I0229 02:45:29.405050    9388 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\test-preload-103800\fixed.
	                          vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : A8F85CAC-F6D4-4E78-B597-2C5AC2A63D0F
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0229 02:45:29.405050    9388 main.go:141] libmachine: [stderr =====>] : 
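The `New-VHD` output above shows `FileSize : 10486272` against `Size : 10485760` for the 10MB fixed disk. That 512-byte difference is not noise: the fixed-format VHD layout is the raw disk image followed by a 512-byte footer. A quick arithmetic check:

```python
def fixed_vhd_file_size(size_bytes: int) -> int:
    """On-disk size of a fixed-format VHD: raw data plus 512-byte footer."""
    FOOTER = 512
    return size_bytes + FOOTER

# -SizeBytes 10MB in the command above
print(fixed_vhd_file_size(10 * 1024 * 1024))
```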
	I0229 02:45:29.405050    9388 main.go:141] libmachine: Writing magic tar header
	I0229 02:45:29.405050    9388 main.go:141] libmachine: Writing SSH key tar header
	I0229 02:45:29.417622    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\test-preload-103800\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\test-preload-103800\disk.vhd' -VHDType Dynamic -DeleteSource
	I0229 02:45:32.463515    9388 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:45:32.463515    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:45:32.463515    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\test-preload-103800\disk.vhd' -SizeBytes 20000MB
	I0229 02:45:34.922196    9388 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:45:34.922196    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:45:34.922286    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM test-preload-103800 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\test-preload-103800' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0229 02:45:38.332066    9388 main.go:141] libmachine: [stdout =====>] : 
	Name                State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                ----- ----------- ----------------- ------   ------             -------
	test-preload-103800 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 02:45:38.332066    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:45:38.332066    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName test-preload-103800 -DynamicMemoryEnabled $false
	I0229 02:45:40.460969    9388 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:45:40.460969    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:45:40.460969    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor test-preload-103800 -Count 2
	I0229 02:45:42.536609    9388 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:45:42.537485    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:45:42.537485    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName test-preload-103800 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\test-preload-103800\boot2docker.iso'
	I0229 02:45:44.977839    9388 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:45:44.977839    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:45:44.977978    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName test-preload-103800 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\test-preload-103800\disk.vhd'
	I0229 02:45:47.426154    9388 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:45:47.426634    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:45:47.426634    9388 main.go:141] libmachine: Starting VM...
	I0229 02:45:47.426634    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM test-preload-103800
	I0229 02:45:50.129159    9388 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:45:50.129247    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:45:50.129300    9388 main.go:141] libmachine: Waiting for host to start...
	I0229 02:45:50.129300    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:45:52.257533    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:45:52.258497    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:45:52.258662    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:45:54.651513    9388 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:45:54.652053    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:45:55.653683    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:45:57.728780    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:45:57.728780    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:45:57.728780    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:46:00.103981    9388 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:46:00.104122    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:01.113655    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:46:03.186253    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:46:03.186253    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:03.186253    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:46:05.560845    9388 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:46:05.561678    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:06.573262    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:46:08.648443    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:46:08.649235    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:08.649316    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:46:11.006359    9388 main.go:141] libmachine: [stdout =====>] : 
	I0229 02:46:11.007145    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:12.018951    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:46:14.080982    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:46:14.081171    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:14.081171    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:46:16.532159    9388 main.go:141] libmachine: [stdout =====>] : 172.19.4.219
	
	I0229 02:46:16.532159    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:16.532159    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:46:18.611805    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:46:18.611805    9388 main.go:141] libmachine: [stderr =====>] : 
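The "Waiting for host to start..." sequence above repeats the same pair of probes, VM state then `ipaddresses[0]`, until Hyper-V reports an address (172.19.4.219 on the fifth query here). The loop shape can be sketched as follows; the fake poller stands in for the PowerShell call, and the retry count and delay are illustrative, not minikube's actual values:

```python
def wait_for_ip(poll, attempts=10):
    """Poll until a non-empty IP is returned; give up after `attempts`."""
    for i in range(attempts):
        ip = poll()
        if ip:
            return ip, i + 1
        # Real code would sleep between polls (the log shows ~1s pauses).
    raise TimeoutError("host never reported an IP")

# Simulate the empty stdout seen four times before the address appears.
answers = iter(["", "", "", "", "172.19.4.219"])
ip, tries = wait_for_ip(lambda: next(answers))
print(ip, tries)
```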
	I0229 02:46:18.611805    9388 machine.go:88] provisioning docker machine ...
	I0229 02:46:18.611805    9388 buildroot.go:166] provisioning hostname "test-preload-103800"
	I0229 02:46:18.611805    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:46:20.675019    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:46:20.675695    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:20.675695    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:46:23.127548    9388 main.go:141] libmachine: [stdout =====>] : 172.19.4.219
	
	I0229 02:46:23.127871    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:23.132737    9388 main.go:141] libmachine: Using SSH client type: native
	I0229 02:46:23.143298    9388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.4.219 22 <nil> <nil>}
	I0229 02:46:23.143298    9388 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-103800 && echo "test-preload-103800" | sudo tee /etc/hostname
	I0229 02:46:23.315578    9388 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-103800
	
	I0229 02:46:23.315578    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:46:25.301229    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:46:25.301446    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:25.301446    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:46:27.695055    9388 main.go:141] libmachine: [stdout =====>] : 172.19.4.219
	
	I0229 02:46:27.695055    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:27.699640    9388 main.go:141] libmachine: Using SSH client type: native
	I0229 02:46:27.700282    9388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.4.219 22 <nil> <nil>}
	I0229 02:46:27.700282    9388 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-103800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-103800/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-103800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:46:27.866265    9388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
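The shell script above is idempotent: it leaves `/etc/hosts` alone when the hostname is already mapped, rewrites an existing `127.0.1.1` entry if there is one, and only otherwise appends a new line. A pure-string emulation of that logic, for illustration only (the real work happens over SSH on the guest):

```python
import re

def ensure_hostname(hosts: str, name: str) -> str:
    """Mirror the grep/sed/tee logic of the provisioning script above."""
    # Hostname already present on some line: nothing to do.
    if re.search(r"^.*\s" + re.escape(name) + r"$", hosts, re.M):
        return hosts
    # Existing 127.0.1.1 entry: rewrite it in place (the sed branch).
    if re.search(r"^127\.0\.1\.1\s", hosts, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}",
                      hosts, flags=re.M)
    # Otherwise append (the tee -a branch).
    if not hosts.endswith("\n"):
        hosts += "\n"
    return hosts + f"127.0.1.1 {name}\n"

hosts = "127.0.0.1 localhost\n127.0.1.1 old-name\n"
print(ensure_hostname(hosts, "test-preload-103800"))
```

Running it twice returns the same content, which is what makes the provisioning step safe to re-run.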
	I0229 02:46:27.866265    9388 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 02:46:27.866265    9388 buildroot.go:174] setting up certificates
	I0229 02:46:27.866265    9388 provision.go:83] configureAuth start
	I0229 02:46:27.866265    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:46:29.888116    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:46:29.888116    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:29.888375    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:46:32.304516    9388 main.go:141] libmachine: [stdout =====>] : 172.19.4.219
	
	I0229 02:46:32.304601    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:32.304665    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:46:34.337846    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:46:34.337846    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:34.337923    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:46:36.754560    9388 main.go:141] libmachine: [stdout =====>] : 172.19.4.219
	
	I0229 02:46:36.754720    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:36.754720    9388 provision.go:138] copyHostCerts
	I0229 02:46:36.755454    9388 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 02:46:36.755525    9388 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 02:46:36.755655    9388 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 02:46:36.757282    9388 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 02:46:36.757282    9388 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 02:46:36.757282    9388 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 02:46:36.758475    9388 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 02:46:36.758475    9388 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 02:46:36.758558    9388 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0229 02:46:36.759160    9388 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.test-preload-103800 san=[172.19.4.219 172.19.4.219 localhost 127.0.0.1 minikube test-preload-103800]
	I0229 02:46:36.961139    9388 provision.go:172] copyRemoteCerts
	I0229 02:46:36.972150    9388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:46:36.972150    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:46:38.985215    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:46:38.985215    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:38.985215    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:46:41.398974    9388 main.go:141] libmachine: [stdout =====>] : 172.19.4.219
	
	I0229 02:46:41.398974    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:41.399481    9388 sshutil.go:53] new ssh client: &{IP:172.19.4.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\test-preload-103800\id_rsa Username:docker}
	I0229 02:46:41.514014    9388 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5416096s)
	I0229 02:46:41.514014    9388 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 02:46:41.564031    9388 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I0229 02:46:41.610622    9388 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 02:46:41.658295    9388 provision.go:86] duration metric: configureAuth took 13.7912583s
	I0229 02:46:41.658369    9388 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:46:41.658883    9388 config.go:182] Loaded profile config "test-preload-103800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.4
	I0229 02:46:41.658883    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:46:43.661574    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:46:43.661574    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:43.661794    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:46:46.089528    9388 main.go:141] libmachine: [stdout =====>] : 172.19.4.219
	
	I0229 02:46:46.089528    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:46.094215    9388 main.go:141] libmachine: Using SSH client type: native
	I0229 02:46:46.094215    9388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.4.219 22 <nil> <nil>}
	I0229 02:46:46.094215    9388 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 02:46:46.235284    9388 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 02:46:46.235284    9388 buildroot.go:70] root file system type: tmpfs
	I0229 02:46:46.235542    9388 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 02:46:46.235626    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:46:48.270954    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:46:48.271226    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:48.271226    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:46:50.688353    9388 main.go:141] libmachine: [stdout =====>] : 172.19.4.219
	
	I0229 02:46:50.688353    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:50.692580    9388 main.go:141] libmachine: Using SSH client type: native
	I0229 02:46:50.693397    9388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.4.219 22 <nil> <nil>}
	I0229 02:46:50.693397    9388 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 02:46:50.861207    9388 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 02:46:50.861793    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:46:52.832835    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:46:52.833153    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:52.833242    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:46:55.260073    9388 main.go:141] libmachine: [stdout =====>] : 172.19.4.219
	
	I0229 02:46:55.260073    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:55.263962    9388 main.go:141] libmachine: Using SSH client type: native
	I0229 02:46:55.264603    9388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.4.219 22 <nil> <nil>}
	I0229 02:46:55.264603    9388 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 02:46:56.377523    9388 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 02:46:56.377602    9388 machine.go:91] provisioned docker machine in 37.7636866s
	I0229 02:46:56.377602    9388 client.go:171] LocalClient.Create took 1m44.3579047s
	I0229 02:46:56.377675    9388 start.go:167] duration metric: libmachine.API.Create for "test-preload-103800" took 1m44.3583555s
	I0229 02:46:56.377675    9388 start.go:300] post-start starting for "test-preload-103800" (driver="hyperv")
	I0229 02:46:56.377675    9388 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:46:56.387451    9388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:46:56.387451    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:46:58.398093    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:46:58.398389    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:46:58.398389    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:47:00.784955    9388 main.go:141] libmachine: [stdout =====>] : 172.19.4.219
	
	I0229 02:47:00.785023    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:47:00.785023    9388 sshutil.go:53] new ssh client: &{IP:172.19.4.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\test-preload-103800\id_rsa Username:docker}
	I0229 02:47:00.905588    9388 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5178848s)
	I0229 02:47:00.914576    9388 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:47:00.922368    9388 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:47:00.922368    9388 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 02:47:00.922368    9388 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 02:47:00.923472    9388 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> 33122.pem in /etc/ssl/certs
	I0229 02:47:00.931884    9388 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:47:00.952072    9388 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /etc/ssl/certs/33122.pem (1708 bytes)
	I0229 02:47:01.014007    9388 start.go:303] post-start completed in 4.6360733s
	I0229 02:47:01.016753    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:47:03.031928    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:47:03.032401    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:47:03.032401    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:47:05.440522    9388 main.go:141] libmachine: [stdout =====>] : 172.19.4.219
	
	I0229 02:47:05.440522    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:47:05.440522    9388 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\test-preload-103800\config.json ...
	I0229 02:47:05.443037    9388 start.go:128] duration metric: createHost completed in 1m53.4245825s
	I0229 02:47:05.443037    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:47:07.435125    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:47:07.435125    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:47:07.435125    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:47:09.820451    9388 main.go:141] libmachine: [stdout =====>] : 172.19.4.219
	
	I0229 02:47:09.820530    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:47:09.826463    9388 main.go:141] libmachine: Using SSH client type: native
	I0229 02:47:09.826463    9388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.4.219 22 <nil> <nil>}
	I0229 02:47:09.826463    9388 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:47:09.967947    9388 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709174830.132488304
	
	I0229 02:47:09.968024    9388 fix.go:206] guest clock: 1709174830.132488304
	I0229 02:47:09.968115    9388 fix.go:219] Guest: 2024-02-29 02:47:10.132488304 +0000 UTC Remote: 2024-02-29 02:47:05.4430372 +0000 UTC m=+118.712443901 (delta=4.689451104s)
	I0229 02:47:09.968221    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:47:11.990527    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:47:11.990527    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:47:11.990602    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:47:14.437916    9388 main.go:141] libmachine: [stdout =====>] : 172.19.4.219
	
	I0229 02:47:14.437916    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:47:14.442094    9388 main.go:141] libmachine: Using SSH client type: native
	I0229 02:47:14.442520    9388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.4.219 22 <nil> <nil>}
	I0229 02:47:14.442520    9388 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709174829
	I0229 02:47:14.589092    9388 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 02:47:09 UTC 2024
	
	I0229 02:47:14.589092    9388 fix.go:226] clock set: Thu Feb 29 02:47:09 UTC 2024
	 (err=<nil>)
	I0229 02:47:14.589092    9388 start.go:83] releasing machines lock for "test-preload-103800", held for 2m2.5706368s
	I0229 02:47:14.589092    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:47:16.599635    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:47:16.599635    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:47:16.600181    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:47:19.015602    9388 main.go:141] libmachine: [stdout =====>] : 172.19.4.219
	
	I0229 02:47:19.015602    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:47:19.018804    9388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:47:19.018804    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:47:19.028141    9388 ssh_runner.go:195] Run: cat /version.json
	I0229 02:47:19.028141    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-103800 ).state
	I0229 02:47:21.090980    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:47:21.090980    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:47:21.091079    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:47:21.091488    9388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:47:21.091584    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:47:21.091584    9388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-103800 ).networkadapters[0]).ipaddresses[0]
	I0229 02:47:23.585548    9388 main.go:141] libmachine: [stdout =====>] : 172.19.4.219
	
	I0229 02:47:23.585548    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:47:23.586416    9388 sshutil.go:53] new ssh client: &{IP:172.19.4.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\test-preload-103800\id_rsa Username:docker}
	I0229 02:47:23.609782    9388 main.go:141] libmachine: [stdout =====>] : 172.19.4.219
	
	I0229 02:47:23.609848    9388 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:47:23.609848    9388 sshutil.go:53] new ssh client: &{IP:172.19.4.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\test-preload-103800\id_rsa Username:docker}
	I0229 02:47:23.809859    9388 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7907869s)
	I0229 02:47:23.809963    9388 ssh_runner.go:235] Completed: cat /version.json: (4.7815552s)
	I0229 02:47:23.819079    9388 ssh_runner.go:195] Run: systemctl --version
	I0229 02:47:23.837501    9388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:47:23.847316    9388 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:47:23.857498    9388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:47:23.888062    9388 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:47:23.888173    9388 start.go:475] detecting cgroup driver to use...
	I0229 02:47:23.888416    9388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:47:23.939109    9388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0229 02:47:23.969845    9388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 02:47:23.991599    9388 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 02:47:24.001353    9388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 02:47:24.030630    9388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:47:24.061872    9388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 02:47:24.091264    9388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 02:47:24.122678    9388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:47:24.164277    9388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 02:47:24.196504    9388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:47:24.225354    9388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:47:24.255026    9388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:47:24.462528    9388 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 02:47:24.493552    9388 start.go:475] detecting cgroup driver to use...
	I0229 02:47:24.502865    9388 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 02:47:24.539541    9388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:47:24.569553    9388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:47:24.620725    9388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:47:24.662532    9388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:47:24.694534    9388 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 02:47:24.750537    9388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 02:47:24.778253    9388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:47:24.824983    9388 ssh_runner.go:195] Run: which cri-dockerd
	I0229 02:47:24.841539    9388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 02:47:24.859879    9388 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 02:47:24.906221    9388 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 02:47:25.113761    9388 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 02:47:25.311765    9388 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 02:47:25.312026    9388 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 02:47:25.362989    9388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:47:25.569169    9388 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 02:48:26.681271    9388 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1085908s)
	I0229 02:48:26.692399    9388 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0229 02:48:26.727942    9388 out.go:177] 
	W0229 02:48:26.728915    9388 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 29 02:46:55 test-preload-103800 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.051466823Z" level=info msg="Starting up"
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.052279335Z" level=info msg="containerd not running, starting managed containerd"
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.053412493Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=654
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.089669218Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.117731307Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.117812019Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.117899931Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.118023148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.118129263Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.118245679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.118581125Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.118609329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.118626632Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.118639633Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.118737547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.119155905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.122285939Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.122472765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.122702196Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.122821413Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.122946730Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.123106552Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.123209567Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.133975659Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.134146683Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.134176487Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.134438623Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.134465727Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.134657954Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.135662893Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.135871622Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136014642Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136042746Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136066449Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136089252Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136109655Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136133058Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136157161Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136186465Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136219370Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136243173Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136275678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136408196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136490208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136565518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136600923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136630127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136650430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136678134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136699937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136725040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136744643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136778047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136801751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136827554Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136859159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136879061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136899164Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136979675Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.137029682Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.137048585Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.137067188Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.137238511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.137384431Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.137410135Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.137897703Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.138116533Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.138362867Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.138398972Z" level=info msg="containerd successfully booted in 0.049962s"
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.186295811Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.203298268Z" level=info msg="Loading containers: start."
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.461620933Z" level=info msg="Loading containers: done."
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.479375015Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.479573242Z" level=info msg="Daemon has completed initialization"
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.540407146Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 02:46:56 test-preload-103800 systemd[1]: Started Docker Application Container Engine.
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.540566268Z" level=info msg="API listen on [::]:2376"
	Feb 29 02:47:25 test-preload-103800 dockerd[648]: time="2024-02-29T02:47:25.764898151Z" level=info msg="Processing signal 'terminated'"
	Feb 29 02:47:25 test-preload-103800 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 02:47:25 test-preload-103800 dockerd[648]: time="2024-02-29T02:47:25.766940102Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: read unix @->/var/run/docker/containerd/containerd.sock: use of closed network connection" module=libcontainerd namespace=moby
	Feb 29 02:47:25 test-preload-103800 dockerd[648]: time="2024-02-29T02:47:25.766978603Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	Feb 29 02:47:25 test-preload-103800 dockerd[648]: time="2024-02-29T02:47:25.767014604Z" level=warning msg="Error while testing if containerd API is ready" error="rpc error: code = Canceled desc = grpc: the client connection is closing"
	Feb 29 02:47:25 test-preload-103800 dockerd[648]: time="2024-02-29T02:47:25.767195808Z" level=info msg="Daemon shutdown complete"
	Feb 29 02:47:25 test-preload-103800 dockerd[648]: time="2024-02-29T02:47:25.767249610Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 02:47:25 test-preload-103800 dockerd[648]: time="2024-02-29T02:47:25.767265610Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Feb 29 02:47:25 test-preload-103800 dockerd[648]: time="2024-02-29T02:47:25.767278811Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 02:47:26 test-preload-103800 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 02:47:26 test-preload-103800 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 02:47:26 test-preload-103800 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 02:47:26 test-preload-103800 dockerd[990]: time="2024-02-29T02:47:26.837123059Z" level=info msg="Starting up"
	Feb 29 02:48:26 test-preload-103800 dockerd[990]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 02:48:26 test-preload-103800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 02:48:26 test-preload-103800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 02:48:26 test-preload-103800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 29 02:46:55 test-preload-103800 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.051466823Z" level=info msg="Starting up"
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.052279335Z" level=info msg="containerd not running, starting managed containerd"
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.053412493Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=654
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.089669218Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.117731307Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.117812019Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.117899931Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.118023148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.118129263Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.118245679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.118581125Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.118609329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.118626632Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.118639633Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.118737547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.119155905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.122285939Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.122472765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.122702196Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.122821413Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.122946730Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.123106552Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.123209567Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.133975659Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.134146683Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.134176487Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.134438623Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.134465727Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.134657954Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.135662893Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.135871622Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136014642Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136042746Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136066449Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136089252Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136109655Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136133058Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136157161Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136186465Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136219370Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136243173Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136275678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136408196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136490208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136565518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136600923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136630127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136650430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136678134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136699937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136725040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136744643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136778047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136801751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136827554Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136859159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136879061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136899164Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.136979675Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.137029682Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.137048585Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.137067188Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.137238511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.137384431Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.137410135Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.137897703Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.138116533Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.138362867Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 02:46:56 test-preload-103800 dockerd[654]: time="2024-02-29T02:46:56.138398972Z" level=info msg="containerd successfully booted in 0.049962s"
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.186295811Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.203298268Z" level=info msg="Loading containers: start."
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.461620933Z" level=info msg="Loading containers: done."
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.479375015Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.479573242Z" level=info msg="Daemon has completed initialization"
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.540407146Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 02:46:56 test-preload-103800 systemd[1]: Started Docker Application Container Engine.
	Feb 29 02:46:56 test-preload-103800 dockerd[648]: time="2024-02-29T02:46:56.540566268Z" level=info msg="API listen on [::]:2376"
	Feb 29 02:47:25 test-preload-103800 dockerd[648]: time="2024-02-29T02:47:25.764898151Z" level=info msg="Processing signal 'terminated'"
	Feb 29 02:47:25 test-preload-103800 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 02:47:25 test-preload-103800 dockerd[648]: time="2024-02-29T02:47:25.766940102Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: read unix @->/var/run/docker/containerd/containerd.sock: use of closed network connection" module=libcontainerd namespace=moby
	Feb 29 02:47:25 test-preload-103800 dockerd[648]: time="2024-02-29T02:47:25.766978603Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
	Feb 29 02:47:25 test-preload-103800 dockerd[648]: time="2024-02-29T02:47:25.767014604Z" level=warning msg="Error while testing if containerd API is ready" error="rpc error: code = Canceled desc = grpc: the client connection is closing"
	Feb 29 02:47:25 test-preload-103800 dockerd[648]: time="2024-02-29T02:47:25.767195808Z" level=info msg="Daemon shutdown complete"
	Feb 29 02:47:25 test-preload-103800 dockerd[648]: time="2024-02-29T02:47:25.767249610Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 02:47:25 test-preload-103800 dockerd[648]: time="2024-02-29T02:47:25.767265610Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Feb 29 02:47:25 test-preload-103800 dockerd[648]: time="2024-02-29T02:47:25.767278811Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 02:47:26 test-preload-103800 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 02:47:26 test-preload-103800 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 02:47:26 test-preload-103800 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 02:47:26 test-preload-103800 dockerd[990]: time="2024-02-29T02:47:26.837123059Z" level=info msg="Starting up"
	Feb 29 02:48:26 test-preload-103800 dockerd[990]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 02:48:26 test-preload-103800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 02:48:26 test-preload-103800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 02:48:26 test-preload-103800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0229 02:48:26.728915    9388 out.go:239] * 
	W0229 02:48:26.730404    9388 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 02:48:26.730984    9388 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-windows-amd64.exe start -p test-preload-103800 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4 failed: exit status 90
panic.go:626: *** TestPreload FAILED at 2024-02-29 02:48:26.9602897 +0000 UTC m=+7566.019264901
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-103800 -n test-preload-103800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-103800 -n test-preload-103800: exit status 6 (11.5789485s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 02:48:27.074337   12120 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 02:48:38.475767   12120 status.go:415] kubeconfig endpoint: extract IP: "test-preload-103800" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-103800" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "test-preload-103800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-103800
E0229 02:49:28.881420    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-103800: (1m2.7332199s)
--- FAIL: TestPreload (274.56s)

TestRunningBinaryUpgrade (929.84s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.368674825.exe start -p running-upgrade-537900 --memory=2200 --vm-driver=hyperv
E0229 02:59:28.911305    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.368674825.exe start -p running-upgrade-537900 --memory=2200 --vm-driver=hyperv: (6m16.6858225s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-537900 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0229 03:07:32.227861    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p running-upgrade-537900 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (7m43.9691191s)

-- stdout --
	* [running-upgrade-537900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the hyperv driver based on existing profile
	* Starting control plane node running-upgrade-537900 in cluster running-upgrade-537900
	* Updating the running hyperv "running-upgrade-537900" VM ...
	
	

-- /stdout --
** stderr ** 
	W0229 03:05:07.664122    3808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 03:05:07.717821    3808 out.go:291] Setting OutFile to fd 1596 ...
	I0229 03:05:07.718656    3808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 03:05:07.718706    3808 out.go:304] Setting ErrFile to fd 876...
	I0229 03:05:07.718755    3808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 03:05:07.740230    3808 out.go:298] Setting JSON to false
	I0229 03:05:07.744072    3808 start.go:129] hostinfo: {"hostname":"minikube5","uptime":272134,"bootTime":1708903773,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 03:05:07.744196    3808 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 03:05:07.745311    3808 out.go:177] * [running-upgrade-537900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 03:05:07.746249    3808 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 03:05:07.746179    3808 notify.go:220] Checking for updates...
	I0229 03:05:07.747700    3808 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 03:05:07.748047    3808 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 03:05:07.748781    3808 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 03:05:07.749671    3808 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 03:05:07.750660    3808 config.go:182] Loaded profile config "running-upgrade-537900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0229 03:05:07.752660    3808 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0229 03:05:07.753080    3808 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 03:05:13.340294    3808 out.go:177] * Using the hyperv driver based on existing profile
	I0229 03:05:13.340373    3808 start.go:299] selected driver: hyperv
	I0229 03:05:13.340373    3808 start.go:903] validating driver "hyperv" against &{Name:running-upgrade-537900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:runni
ng-upgrade-537900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.10.7 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0229 03:05:13.341010    3808 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 03:05:13.386228    3808 cni.go:84] Creating CNI manager for ""
	I0229 03:05:13.386228    3808 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 03:05:13.386228    3808 start_flags.go:323] config:
	{Name:running-upgrade-537900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-537900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.10.7 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0229 03:05:13.386228    3808 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 03:05:13.388161    3808 out.go:177] * Starting control plane node running-upgrade-537900 in cluster running-upgrade-537900
	I0229 03:05:13.388891    3808 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0229 03:05:13.388891    3808 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4
	I0229 03:05:13.388891    3808 cache.go:56] Caching tarball of preloaded images
	I0229 03:05:13.389712    3808 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 03:05:13.389864    3808 cache.go:59] Finished verifying existence of preloaded tar for  v1.24.1 on docker
	I0229 03:05:13.390033    3808 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\running-upgrade-537900\config.json ...
	I0229 03:05:13.391947    3808 start.go:365] acquiring machines lock for running-upgrade-537900: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 03:10:22.941869    3808 start.go:369] acquired machines lock for "running-upgrade-537900" in 5m9.532668s
	I0229 03:10:22.941869    3808 start.go:96] Skipping create...Using existing machine configuration
	I0229 03:10:22.941869    3808 fix.go:54] fixHost starting: 
	I0229 03:10:22.942370    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-537900 ).state
	I0229 03:10:25.021829    3808 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:10:25.021829    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:10:25.021829    3808 fix.go:102] recreateIfNeeded on running-upgrade-537900: state=Running err=<nil>
	W0229 03:10:25.021829    3808 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 03:10:25.022500    3808 out.go:177] * Updating the running hyperv "running-upgrade-537900" VM ...
	I0229 03:10:25.023485    3808 machine.go:88] provisioning docker machine ...
	I0229 03:10:25.023485    3808 buildroot.go:166] provisioning hostname "running-upgrade-537900"
	I0229 03:10:25.023485    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-537900 ).state
	I0229 03:10:27.121406    3808 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:10:27.121406    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:10:27.121406    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-537900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:10:29.669633    3808 main.go:141] libmachine: [stdout =====>] : 172.19.10.7
	
	I0229 03:10:29.669633    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:10:29.674915    3808 main.go:141] libmachine: Using SSH client type: native
	I0229 03:10:29.675514    3808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.10.7 22 <nil> <nil>}
	I0229 03:10:29.675514    3808 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-537900 && echo "running-upgrade-537900" | sudo tee /etc/hostname
	I0229 03:10:29.871032    3808 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-537900
	
	I0229 03:10:29.871032    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-537900 ).state
	I0229 03:10:32.010569    3808 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:10:32.010569    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:10:32.010569    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-537900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:10:34.373378    3808 main.go:141] libmachine: [stdout =====>] : 172.19.10.7
	
	I0229 03:10:34.374067    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:10:34.380664    3808 main.go:141] libmachine: Using SSH client type: native
	I0229 03:10:34.380772    3808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.10.7 22 <nil> <nil>}
	I0229 03:10:34.380772    3808 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-537900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-537900/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-537900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 03:10:34.531621    3808 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 03:10:34.531704    3808 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 03:10:34.531768    3808 buildroot.go:174] setting up certificates
	I0229 03:10:34.531768    3808 provision.go:83] configureAuth start
	I0229 03:10:34.531837    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-537900 ).state
	I0229 03:10:36.572960    3808 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:10:36.572960    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:10:36.572960    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-537900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:10:39.162832    3808 main.go:141] libmachine: [stdout =====>] : 172.19.10.7
	
	I0229 03:10:39.162985    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:10:39.163096    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-537900 ).state
	I0229 03:10:41.233882    3808 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:10:41.233882    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:10:41.233882    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-537900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:10:43.764634    3808 main.go:141] libmachine: [stdout =====>] : 172.19.10.7
	
	I0229 03:10:43.764690    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:10:43.764690    3808 provision.go:138] copyHostCerts
	I0229 03:10:43.764690    3808 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 03:10:43.764690    3808 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 03:10:43.765390    3808 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 03:10:43.766501    3808 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 03:10:43.766501    3808 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 03:10:43.766582    3808 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0229 03:10:43.767918    3808 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 03:10:43.767918    3808 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 03:10:43.768346    3808 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 03:10:43.769580    3808 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.running-upgrade-537900 san=[172.19.10.7 172.19.10.7 localhost 127.0.0.1 minikube running-upgrade-537900]
	I0229 03:10:43.884675    3808 provision.go:172] copyRemoteCerts
	I0229 03:10:43.893678    3808 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 03:10:43.893678    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-537900 ).state
	I0229 03:10:45.980289    3808 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:10:45.980289    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:10:45.980289    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-537900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:10:48.518403    3808 main.go:141] libmachine: [stdout =====>] : 172.19.10.7
	
	I0229 03:10:48.518472    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:10:48.518998    3808 sshutil.go:53] new ssh client: &{IP:172.19.10.7 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\running-upgrade-537900\id_rsa Username:docker}
	I0229 03:10:48.638080    3808 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7441383s)
	I0229 03:10:48.638080    3808 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 03:10:48.679513    3808 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 03:10:48.725238    3808 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 03:10:48.764240    3808 provision.go:86] duration metric: configureAuth took 14.2316805s
	I0229 03:10:48.764240    3808 buildroot.go:189] setting minikube options for container-runtime
	I0229 03:10:48.764881    3808 config.go:182] Loaded profile config "running-upgrade-537900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0229 03:10:48.764881    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-537900 ).state
	I0229 03:10:50.883060    3808 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:10:50.883418    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:10:50.883418    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-537900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:10:53.368089    3808 main.go:141] libmachine: [stdout =====>] : 172.19.10.7
	
	I0229 03:10:53.368089    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:10:53.372248    3808 main.go:141] libmachine: Using SSH client type: native
	I0229 03:10:53.373033    3808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.10.7 22 <nil> <nil>}
	I0229 03:10:53.373033    3808 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 03:10:53.539808    3808 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 03:10:53.539808    3808 buildroot.go:70] root file system type: tmpfs
	I0229 03:10:53.539808    3808 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 03:10:53.541143    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-537900 ).state
	I0229 03:10:55.721155    3808 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:10:55.721155    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:10:55.721155    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-537900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:10:58.210235    3808 main.go:141] libmachine: [stdout =====>] : 172.19.10.7
	
	I0229 03:10:58.210569    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:10:58.218324    3808 main.go:141] libmachine: Using SSH client type: native
	I0229 03:10:58.218963    3808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.10.7 22 <nil> <nil>}
	I0229 03:10:58.218963    3808 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 03:10:58.392819    3808 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 03:10:58.392819    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-537900 ).state
	I0229 03:11:00.442072    3808 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:11:00.442194    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:11:00.442194    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-537900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:11:02.956710    3808 main.go:141] libmachine: [stdout =====>] : 172.19.10.7
	
	I0229 03:11:02.956710    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:11:02.960710    3808 main.go:141] libmachine: Using SSH client type: native
	I0229 03:11:02.960710    3808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.10.7 22 <nil> <nil>}
	I0229 03:11:02.961711    3808 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 03:11:03.126773    3808 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 03:11:03.126853    3808 machine.go:91] provisioned docker machine in 38.1012493s
	I0229 03:11:03.126853    3808 start.go:300] post-start starting for "running-upgrade-537900" (driver="hyperv")
	I0229 03:11:03.126853    3808 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 03:11:03.136366    3808 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 03:11:03.136366    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-537900 ).state
	I0229 03:11:05.255326    3808 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:11:05.255326    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:11:05.255326    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-537900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:11:07.835377    3808 main.go:141] libmachine: [stdout =====>] : 172.19.10.7
	
	I0229 03:11:07.835460    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:11:07.835460    3808 sshutil.go:53] new ssh client: &{IP:172.19.10.7 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\running-upgrade-537900\id_rsa Username:docker}
	I0229 03:11:07.957016    3808 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8203824s)
	I0229 03:11:07.967130    3808 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 03:11:07.975147    3808 info.go:137] Remote host: Buildroot 2021.02.12
	I0229 03:11:07.975285    3808 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 03:11:07.975543    3808 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 03:11:07.976260    3808 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> 33122.pem in /etc/ssl/certs
	I0229 03:11:07.992967    3808 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 03:11:08.015709    3808 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /etc/ssl/certs/33122.pem (1708 bytes)
	I0229 03:11:08.061783    3808 start.go:303] post-start completed in 4.9346565s
	I0229 03:11:08.061783    3808 fix.go:56] fixHost completed within 45.1174054s
	I0229 03:11:08.061783    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-537900 ).state
	I0229 03:11:10.407635    3808 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:11:10.407670    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:11:10.407670    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-537900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:11:13.321568    3808 main.go:141] libmachine: [stdout =====>] : 172.19.10.7
	
	I0229 03:11:13.321635    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:11:13.328501    3808 main.go:141] libmachine: Using SSH client type: native
	I0229 03:11:13.328919    3808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.10.7 22 <nil> <nil>}
	I0229 03:11:13.329027    3808 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 03:11:13.488501    3808 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709176273.659073083
	
	I0229 03:11:13.488501    3808 fix.go:206] guest clock: 1709176273.659073083
	I0229 03:11:13.488501    3808 fix.go:219] Guest: 2024-02-29 03:11:13.659073083 +0000 UTC Remote: 2024-02-29 03:11:08.0617839 +0000 UTC m=+360.460581301 (delta=5.597289183s)
	I0229 03:11:13.489032    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-537900 ).state
	I0229 03:11:15.659624    3808 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:11:15.659666    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:11:15.659666    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-537900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:11:18.183953    3808 main.go:141] libmachine: [stdout =====>] : 172.19.10.7
	
	I0229 03:11:18.184327    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:11:18.190491    3808 main.go:141] libmachine: Using SSH client type: native
	I0229 03:11:18.191153    3808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.10.7 22 <nil> <nil>}
	I0229 03:11:18.191153    3808 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709176273
	I0229 03:11:18.359213    3808 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 03:11:13 UTC 2024
	
	I0229 03:11:18.359213    3808 fix.go:226] clock set: Thu Feb 29 03:11:13 UTC 2024
	 (err=<nil>)
	I0229 03:11:18.359213    3808 start.go:83] releasing machines lock for "running-upgrade-537900", held for 55.4142629s
	I0229 03:11:18.359888    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-537900 ).state
	I0229 03:11:20.496590    3808 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:11:20.496735    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:11:20.496735    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-537900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:11:23.071256    3808 main.go:141] libmachine: [stdout =====>] : 172.19.10.7
	
	I0229 03:11:23.071256    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:11:23.075096    3808 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 03:11:23.075096    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-537900 ).state
	I0229 03:11:23.087695    3808 ssh_runner.go:195] Run: cat /version.json
	I0229 03:11:23.088291    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-537900 ).state
	I0229 03:11:25.298169    3808 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:11:25.298169    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:11:25.298399    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-537900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:11:25.341672    3808 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:11:25.341897    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:11:25.341954    3808 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-537900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:11:27.953385    3808 main.go:141] libmachine: [stdout =====>] : 172.19.10.7
	
	I0229 03:11:27.953385    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:11:27.953385    3808 sshutil.go:53] new ssh client: &{IP:172.19.10.7 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\running-upgrade-537900\id_rsa Username:docker}
	I0229 03:11:27.992549    3808 main.go:141] libmachine: [stdout =====>] : 172.19.10.7
	
	I0229 03:11:27.992549    3808 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:11:27.992549    3808 sshutil.go:53] new ssh client: &{IP:172.19.10.7 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\running-upgrade-537900\id_rsa Username:docker}
	I0229 03:11:38.166048    3808 ssh_runner.go:235] Completed: cat /version.json: (15.0769191s)
	I0229 03:11:38.166593    3808 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (15.0901129s)
	W0229 03:11:38.166653    3808 start.go:843] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	W0229 03:11:38.166653    3808 start.go:420] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	W0229 03:11:38.166721    3808 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	! This VM is having trouble accessing https://registry.k8s.io
	W0229 03:11:38.166786    3808 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0229 03:11:38.183830    3808 ssh_runner.go:195] Run: systemctl --version
	I0229 03:11:38.201651    3808 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 03:11:38.209980    3808 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 03:11:38.219517    3808 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0229 03:11:38.244981    3808 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0229 03:11:38.276196    3808 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 03:11:38.276196    3808 start.go:475] detecting cgroup driver to use...
	I0229 03:11:38.276196    3808 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 03:11:38.326670    3808 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0229 03:11:38.354450    3808 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 03:11:38.372634    3808 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 03:11:38.381628    3808 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 03:11:38.408094    3808 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 03:11:38.437102    3808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 03:11:38.467313    3808 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 03:11:38.498696    3808 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 03:11:38.527777    3808 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 03:11:38.555795    3808 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 03:11:38.587859    3808 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 03:11:38.616684    3808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 03:11:38.909582    3808 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 03:11:38.941280    3808 start.go:475] detecting cgroup driver to use...
	I0229 03:11:38.950290    3808 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 03:11:38.982027    3808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 03:11:39.015690    3808 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 03:11:39.073838    3808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 03:11:39.106442    3808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 03:11:39.129909    3808 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 03:11:39.168810    3808 ssh_runner.go:195] Run: which cri-dockerd
	I0229 03:11:39.184250    3808 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 03:11:39.201380    3808 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 03:11:39.244233    3808 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 03:11:39.544919    3808 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 03:11:39.830100    3808 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 03:11:39.830100    3808 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 03:11:39.865700    3808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 03:11:40.129913    3808 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 03:12:51.406712    3808 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.272398s)
	I0229 03:12:51.419797    3808 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0229 03:12:51.506704    3808 out.go:177] 
	W0229 03:12:51.507991    3808 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Thu 2024-02-29 03:02:58 UTC, ends at Thu 2024-02-29 03:12:51 UTC. --
	Feb 29 03:03:48 running-upgrade-537900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.432532450Z" level=info msg="Starting up"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.435372915Z" level=info msg="libcontainerd: started new containerd process" pid=680
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.435448841Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.435460645Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.435491055Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.435512563Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48Z" level=warning msg="deprecated version : `1`, please switch to version `2`"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.476465382Z" level=info msg="starting containerd" revision=212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 version=v1.6.4
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.501809396Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.502197528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.504743193Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.504853230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.505564472Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.505682612Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.505708021Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.505721125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.505825861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.506243803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.506577616Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.506691755Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.506767181Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.506862813Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517090089Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517131604Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517157312Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517215432Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517234338Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517316967Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517425203Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517449111Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517467518Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517483823Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517499228Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517515834Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517666385Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517956184Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.518730547Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.518849187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.518894403Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.518990335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519224915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519249523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519264428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519306543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519328250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519342655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519356260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519373466Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519444790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519552126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519570833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519584537Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519601543Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519614948Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519633954Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519954063Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.520246262Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.520544564Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.520572173Z" level=info msg="containerd successfully booted in 0.048229s"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.529750192Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.529882337Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.529906646Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.529918450Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.531845505Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.531953041Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.531974148Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.531984752Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.552917366Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.552973385Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.552982589Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.552989491Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.552996493Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.553003196Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.553484859Z" level=info msg="Loading containers: start."
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.675465518Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.753020378Z" level=info msg="Loading containers: done."
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.770907457Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.771029299Z" level=info msg="Daemon has completed initialization"
	Feb 29 03:03:48 running-upgrade-537900 systemd[1]: Started Docker Application Container Engine.
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.812772686Z" level=info msg="API listen on [::]:2376"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.830697078Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 03:04:29 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:04:29.052516661Z" level=info msg="Processing signal 'terminated'"
	Feb 29 03:04:29 running-upgrade-537900 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 03:04:29 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:04:29.053553105Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 29 03:04:29 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:04:29.053993124Z" level=info msg="Daemon shutdown complete"
	Feb 29 03:04:29 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:04:29.054050726Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 03:04:29 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:04:29.054076927Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 03:04:30 running-upgrade-537900 systemd[1]: docker.service: Succeeded.
	Feb 29 03:04:30 running-upgrade-537900 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 03:04:30 running-upgrade-537900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.128057091Z" level=info msg="Starting up"
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.130485094Z" level=info msg="libcontainerd: started new containerd process" pid=945
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.130677402Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.130729205Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.130780707Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.130828209Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30Z" level=warning msg="deprecated version : `1`, please switch to version `2`"
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.167743271Z" level=info msg="starting containerd" revision=212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 version=v1.6.4
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.188724360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.188870766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.191696485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.191815190Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.192239408Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.192340213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.192362714Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.192376614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.192407616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.192730329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193396457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193434659Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193459560Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193471661Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193605666Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193627967Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193642468Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193671469Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193689470Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193705770Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193729071Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193747572Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193764073Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193780274Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193795974Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193810575Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193946281Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.194235393Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195032127Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195153632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195175433Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195255836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195296238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195313639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195328139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195345840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195360741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195373841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195387942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195409743Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195453944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195469245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195485846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195502247Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195519247Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195531948Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195554549Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195823060Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195961566Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.196024369Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.196055270Z" level=info msg="containerd successfully booted in 0.029249s"
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.208789309Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.208826711Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.208846411Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.208856312Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.210872497Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.210909099Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.210927100Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.210938400Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.755904002Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.756017307Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.756029407Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.756035908Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.756042608Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.756048908Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.756299119Z" level=info msg="Loading containers: start."
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.908165748Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.971491028Z" level=info msg="Loading containers: done."
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.991348169Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.991547377Z" level=info msg="Daemon has completed initialization"
	Feb 29 03:04:32 running-upgrade-537900 systemd[1]: Started Docker Application Container Engine.
	Feb 29 03:04:32 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:32.030207114Z" level=info msg="API listen on [::]:2376"
	Feb 29 03:04:32 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:32.038173251Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 03:04:32 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:32.764481998Z" level=info msg="Processing signal 'terminated'"
	Feb 29 03:04:32 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:32.765818354Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 29 03:04:32 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:32.766189870Z" level=info msg="Daemon shutdown complete"
	Feb 29 03:04:32 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:32.766210171Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 03:04:32 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:32.766215271Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 03:04:32 running-upgrade-537900 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 03:04:33 running-upgrade-537900 systemd[1]: docker.service: Succeeded.
	Feb 29 03:04:33 running-upgrade-537900 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 03:04:33 running-upgrade-537900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.827021277Z" level=info msg="Starting up"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.830460323Z" level=info msg="libcontainerd: started new containerd process" pid=1125
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.830657131Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.830716934Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.830779237Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.830825639Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33Z" level=warning msg="deprecated version : `1`, please switch to version `2`"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.867013670Z" level=info msg="starting containerd" revision=212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 version=v1.6.4
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.888628385Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.888774392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.891255097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.891378302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.891658214Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.891812920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.891834421Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.891850522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.891880723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892089632Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892362243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892463848Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892488549Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892500949Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892641555Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892752560Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892771861Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892799162Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892815463Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892832863Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892848364Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892863565Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892879265Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892894466Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892964369Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892982070Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.893047272Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.893174078Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.893690300Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.893810205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.893830206Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.893880308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.893984212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894003213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894018014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894032314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894046215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894064216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894078116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894095117Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894135619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894151819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894165520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894179320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894196221Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894209622Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894231123Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894441632Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.895150562Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.895214964Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.895232265Z" level=info msg="containerd successfully booted in 0.030064s"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.911056935Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.911174840Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.911197741Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.911207841Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.913324831Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.913359932Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.913376333Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.913387334Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.926878805Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.926963808Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.927010410Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.927046112Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.927083213Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.927117015Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.927652137Z" level=info msg="Loading containers: start."
	Feb 29 03:04:34 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:34.080519109Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 29 03:04:34 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:34.153555200Z" level=info msg="Loading containers: done."
	Feb 29 03:04:34 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:34.173228933Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Feb 29 03:04:34 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:34.173295636Z" level=info msg="Daemon has completed initialization"
	Feb 29 03:04:34 running-upgrade-537900 systemd[1]: Started Docker Application Container Engine.
	Feb 29 03:04:34 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:34.202551175Z" level=info msg="API listen on [::]:2376"
	Feb 29 03:04:34 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:34.220482134Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 03:04:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:43.155910453Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref moby/1/index-sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db locked for 336.871821ms (since 2024-02-29 03:04:42.632941708 +0000 UTC m=+8.789125921): unavailable" expected="sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db" ref="index-sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db" total=3829
	Feb 29 03:04:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:43.955357969Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref moby/1/index-sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db locked for 818.969628ms (since 2024-02-29 03:04:42.632941708 +0000 UTC m=+8.789125921): unavailable" expected="sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db" ref="index-sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db" total=3829
	Feb 29 03:04:44 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:44.384609868Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref moby/1/index-sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db locked for 1.578454486s (since 2024-02-29 03:04:42.632941708 +0000 UTC m=+8.789125921): unavailable" expected="sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db" ref="index-sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db" total=3829
	Feb 29 03:04:44 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:44.679496269Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref moby/1/manifest-sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4 locked for 179.83842ms (since 2024-02-29 03:04:44.393444696 +0000 UTC m=+10.549628909): unavailable" expected="sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4" ref="manifest-sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4" total=526
	Feb 29 03:04:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:45.144278313Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref moby/1/manifest-sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4 locked for 699.159789ms (since 2024-02-29 03:04:44.393444696 +0000 UTC m=+10.549628909): unavailable" expected="sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4" ref="manifest-sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4" total=526
	Feb 29 03:04:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:45.694811003Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref moby/1/manifest-sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4 locked for 1.09765383s (since 2024-02-29 03:04:44.393444696 +0000 UTC m=+10.549628909): unavailable" expected="sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4" ref="manifest-sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4" total=526
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.000370618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.000438724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.000457225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.002454980Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/773f96f2c596239afc3b8af80ba08f2a5803da84e416253cc3c4dab992e38975 pid=1741 runtime=io.containerd.runc.v2
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.059917837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.060047147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.060080849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.060339969Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/cb750ae096b167ce6fee9cf173cb81570f5537bf2ce1871dc4ace5666be9b8d1 pid=1768 runtime=io.containerd.runc.v2
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.077929234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.078043142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.078093546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.079009117Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ceec2146a01886ad08a8bec23f70c867db9126e792739af78a14521271551204 pid=1795 runtime=io.containerd.runc.v2
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.083947700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.084020106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.084033007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.084302728Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5102125c0aabb2aa4110dbb4a8b04ee5c20c5fea40cc0e07d22b3827ef618453 pid=1801 runtime=io.containerd.runc.v2
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.960334669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.960546085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.960650493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.960882011Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/422f92f237f7d7afcf10ac08338b70aa3a1cf9efdf0f8601bd50c16d38fa4828 pid=1898 runtime=io.containerd.runc.v2
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.253898679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.254053890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.254128096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.254765844Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5293717322e32a9e61bbd336b4753c6a4280ba9cef910b4d3056cf40eb5c0979 pid=1944 runtime=io.containerd.runc.v2
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.340720621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.340858531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.340891134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.341106450Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/92dac45ab5c2f109824c750eafd3c495cf506b504a69837317d7d46961d64206 pid=1970 runtime=io.containerd.runc.v2
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.554239810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.554861757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.554919362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.555141278Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a2d0c35d6a25f91ded896db2ee51950e9a6f267b1468b1c12a2c82db98159256 pid=2016 runtime=io.containerd.runc.v2
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.432638390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.432717294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.432730595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.440076861Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c42db7c66d49cef965a9fd02cdbd92342aa3c79e241a1338e1af1effcf3128de pid=2456 runtime=io.containerd.runc.v2
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.489270211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.489427619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.489507623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.489755936Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5de768bd612e0ab6153d234026f2084b2df337978efebf564440d5105d66d6b8 pid=2486 runtime=io.containerd.runc.v2
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.772906841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.773089050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.773277660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.774042198Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/545389747d412f0742c43b7fcf75b6eb1dc396fc89c10008816439edf7093499 pid=2535 runtime=io.containerd.runc.v2
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.220725347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.220871454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.220906456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.221338977Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/73cf4890d9c30179ad7024211fe1d9c04a0818069a177fd99d7dcefb26ddbb18 pid=2580 runtime=io.containerd.runc.v2
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.834103117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.834208822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.834327828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.834636543Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c0391089a5369d3560f38de3fb981bf4902edace5ec511aaf9e7e4d5f2220fbd pid=2643 runtime=io.containerd.runc.v2
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.844667238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.845077258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.845250467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.848915047Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b3dda49d7ad2e826f09fa11996880d8d59e9bd993e89fd85743b27f6452efb21 pid=2649 runtime=io.containerd.runc.v2
	Feb 29 03:05:42 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:05:42.373981748Z" level=info msg="ignoring event" container=b3dda49d7ad2e826f09fa11996880d8d59e9bd993e89fd85743b27f6452efb21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:05:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:42.375060695Z" level=info msg="shim disconnected" id=b3dda49d7ad2e826f09fa11996880d8d59e9bd993e89fd85743b27f6452efb21
	Feb 29 03:05:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:42.375224302Z" level=warning msg="cleaning up after shim disconnected" id=b3dda49d7ad2e826f09fa11996880d8d59e9bd993e89fd85743b27f6452efb21 namespace=moby
	Feb 29 03:05:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:42.375241003Z" level=info msg="cleaning up dead shim"
	Feb 29 03:05:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:42.392420046Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:05:42Z\" level=info msg=\"starting signal loop\" namespace=moby pid=3000 runtime=io.containerd.runc.v2\n"
	Feb 29 03:05:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:43.058370866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:05:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:43.058510472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:05:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:43.058525873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:05:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:43.059099097Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/7dcafbe93c312107c70523f6252f495dd771ede92533ff03a100d8cc9de33c8e pid=3020 runtime=io.containerd.runc.v2
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.323610529Z" level=info msg="Processing signal 'terminated'"
	Feb 29 03:11:40 running-upgrade-537900 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.648543251Z" level=info msg="ignoring event" container=5102125c0aabb2aa4110dbb4a8b04ee5c20c5fea40cc0e07d22b3827ef618453 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.649748408Z" level=info msg="shim disconnected" id=5102125c0aabb2aa4110dbb4a8b04ee5c20c5fea40cc0e07d22b3827ef618453
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.650206529Z" level=warning msg="cleaning up after shim disconnected" id=5102125c0aabb2aa4110dbb4a8b04ee5c20c5fea40cc0e07d22b3827ef618453 namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.650269032Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.664476698Z" level=info msg="ignoring event" container=cb750ae096b167ce6fee9cf173cb81570f5537bf2ce1871dc4ace5666be9b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.664896317Z" level=info msg="shim disconnected" id=cb750ae096b167ce6fee9cf173cb81570f5537bf2ce1871dc4ace5666be9b8d1
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.665412641Z" level=warning msg="cleaning up after shim disconnected" id=cb750ae096b167ce6fee9cf173cb81570f5537bf2ce1871dc4ace5666be9b8d1 namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.665528847Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.688381817Z" level=info msg="ignoring event" container=c42db7c66d49cef965a9fd02cdbd92342aa3c79e241a1338e1af1effcf3128de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.688932443Z" level=info msg="ignoring event" container=92dac45ab5c2f109824c750eafd3c495cf506b504a69837317d7d46961d64206 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.689388465Z" level=info msg="shim disconnected" id=73cf4890d9c30179ad7024211fe1d9c04a0818069a177fd99d7dcefb26ddbb18
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.690071997Z" level=warning msg="cleaning up after shim disconnected" id=73cf4890d9c30179ad7024211fe1d9c04a0818069a177fd99d7dcefb26ddbb18 namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.690196002Z" level=info msg="ignoring event" container=73cf4890d9c30179ad7024211fe1d9c04a0818069a177fd99d7dcefb26ddbb18 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.690364310Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.689585574Z" level=info msg="shim disconnected" id=c42db7c66d49cef965a9fd02cdbd92342aa3c79e241a1338e1af1effcf3128de
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.691356657Z" level=warning msg="cleaning up after shim disconnected" id=c42db7c66d49cef965a9fd02cdbd92342aa3c79e241a1338e1af1effcf3128de namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.691463162Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.689643777Z" level=info msg="shim disconnected" id=92dac45ab5c2f109824c750eafd3c495cf506b504a69837317d7d46961d64206
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.691940384Z" level=warning msg="cleaning up after shim disconnected" id=92dac45ab5c2f109824c750eafd3c495cf506b504a69837317d7d46961d64206 namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.692045789Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.718256317Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6066 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.721344262Z" level=info msg="shim disconnected" id=545389747d412f0742c43b7fcf75b6eb1dc396fc89c10008816439edf7093499
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.721386264Z" level=warning msg="cleaning up after shim disconnected" id=545389747d412f0742c43b7fcf75b6eb1dc396fc89c10008816439edf7093499 namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.721397164Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.721813284Z" level=info msg="ignoring event" container=773f96f2c596239afc3b8af80ba08f2a5803da84e416253cc3c4dab992e38975 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.721948990Z" level=info msg="shim disconnected" id=773f96f2c596239afc3b8af80ba08f2a5803da84e416253cc3c4dab992e38975
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.722057295Z" level=warning msg="cleaning up after shim disconnected" id=773f96f2c596239afc3b8af80ba08f2a5803da84e416253cc3c4dab992e38975 namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.722085496Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.723517363Z" level=info msg="ignoring event" container=545389747d412f0742c43b7fcf75b6eb1dc396fc89c10008816439edf7093499 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.759466147Z" level=info msg="ignoring event" container=7dcafbe93c312107c70523f6252f495dd771ede92533ff03a100d8cc9de33c8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.764466882Z" level=info msg="shim disconnected" id=7dcafbe93c312107c70523f6252f495dd771ede92533ff03a100d8cc9de33c8e
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.764540885Z" level=warning msg="cleaning up after shim disconnected" id=7dcafbe93c312107c70523f6252f495dd771ede92533ff03a100d8cc9de33c8e namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.764553086Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.784413316Z" level=info msg="shim disconnected" id=5293717322e32a9e61bbd336b4753c6a4280ba9cef910b4d3056cf40eb5c0979
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.785075247Z" level=info msg="ignoring event" container=ceec2146a01886ad08a8bec23f70c867db9126e792739af78a14521271551204 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.785126650Z" level=info msg="ignoring event" container=5293717322e32a9e61bbd336b4753c6a4280ba9cef910b4d3056cf40eb5c0979 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.785547869Z" level=warning msg="cleaning up after shim disconnected" id=5293717322e32a9e61bbd336b4753c6a4280ba9cef910b4d3056cf40eb5c0979 namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.785683876Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.815730683Z" level=info msg="shim disconnected" id=ceec2146a01886ad08a8bec23f70c867db9126e792739af78a14521271551204
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.815924792Z" level=warning msg="cleaning up after shim disconnected" id=ceec2146a01886ad08a8bec23f70c867db9126e792739af78a14521271551204 namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.816743931Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.819412856Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6049 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.820059286Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6113 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.820472805Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6085 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.821419750Z" level=info msg="ignoring event" container=5de768bd612e0ab6153d234026f2084b2df337978efebf564440d5105d66d6b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.833228203Z" level=info msg="shim disconnected" id=5de768bd612e0ab6153d234026f2084b2df337978efebf564440d5105d66d6b8
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.833418312Z" level=warning msg="cleaning up after shim disconnected" id=5de768bd612e0ab6153d234026f2084b2df337978efebf564440d5105d66d6b8 namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.833516616Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.847650479Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6107 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.867504309Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6095 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.872211429Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6124 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.907481581Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6139 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.951194629Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6159 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.980023880Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6202 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.031655098Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6190 runtime=io.containerd.runc.v2\ntime=\"2024-02-29T03:11:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.247107592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.247193996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.247208596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.247106792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.247382304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.247415606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.247697719Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e165fece89825bcd351eeff1c5f15fa179878aec4c8ad74404c2cf6dbdd932d1 pid=6253 runtime=io.containerd.runc.v2
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.248113339Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d313a243e210fdabe07a20c7ac394065898c69d451eddba99d89e1a44de81677 pid=6250 runtime=io.containerd.runc.v2
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.512266413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.512625630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.512753736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.513358464Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0077f00dcb9548c5f0ab56cc1bd6f826e5c2450c861ba2b3bb6ed3697015254f pid=6328 runtime=io.containerd.runc.v2
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.879128899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.879212803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.879226604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.880126146Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f56bf3f32279114e10e226efa0f4177a335f83a05142f8f0fd5e76412dd963ab pid=6389 runtime=io.containerd.runc.v2
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.246737320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.247116538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.247409052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.247745568Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/587325f9692391e02fb1a5419728434cf3e2ea91bebdf36177abf66e448b7f4c pid=6432 runtime=io.containerd.runc.v2
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.265388194Z" level=info msg="shim disconnected" id=a2d0c35d6a25f91ded896db2ee51950e9a6f267b1468b1c12a2c82db98159256
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.265729210Z" level=warning msg="cleaning up after shim disconnected" id=a2d0c35d6a25f91ded896db2ee51950e9a6f267b1468b1c12a2c82db98159256 namespace=moby
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.265963021Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:42.267147276Z" level=info msg="ignoring event" container=a2d0c35d6a25f91ded896db2ee51950e9a6f267b1468b1c12a2c82db98159256 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.295200791Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:42Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6448 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.501506055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.501562658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.501575759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.501733366Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/9d9279755591c2ba85c71ff575d9496bc06790ed444879144b0438ce9424fcf8 pid=6496 runtime=io.containerd.runc.v2
	Feb 29 03:11:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:43.362025567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:43.362104771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:43.362118872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:43.362272179Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a46421a03b8957a8c26e3459810f1277ee95ae1d970a6740270ee52807180780 pid=6540 runtime=io.containerd.runc.v2
	Feb 29 03:11:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:43.803769461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:43.803919568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:43.803958370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:43.804210982Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d4267c7991613d41dfb3745520f7f25e2abc970a5a7ce256d137fa19372f70c6 pid=6579 runtime=io.containerd.runc.v2
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.337663718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.338378752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.338774470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.339770917Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0668669812f03181e4cd95cbe256cd6f42a4542d45070354aefbfa8a34ee094d pid=6639 runtime=io.containerd.runc.v2
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:45.626930869Z" level=info msg="ignoring event" container=c0391089a5369d3560f38de3fb981bf4902edace5ec511aaf9e7e4d5f2220fbd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.627526397Z" level=info msg="shim disconnected" id=c0391089a5369d3560f38de3fb981bf4902edace5ec511aaf9e7e4d5f2220fbd
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.627596100Z" level=warning msg="cleaning up after shim disconnected" id=c0391089a5369d3560f38de3fb981bf4902edace5ec511aaf9e7e4d5f2220fbd namespace=moby
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.627609101Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.643970468Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:45Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6675 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.805140618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.805287025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.805351828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.805705044Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/608f97f2a83aa2d7ba3be31b0e5eef6abacf435bdade0f9e96650ee184792ad8 pid=6702 runtime=io.containerd.runc.v2
	Feb 29 03:11:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:46.739423385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:46.739516590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:46.739530290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:46.740516536Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6d629a0cc6494682cdd14f644ba1747b9f47af78dc599ab243734757742b7276 pid=6763 runtime=io.containerd.runc.v2
	Feb 29 03:11:47 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:47.114559459Z" level=info msg="ignoring event" container=f56bf3f32279114e10e226efa0f4177a335f83a05142f8f0fd5e76412dd963ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:47.115542705Z" level=info msg="shim disconnected" id=f56bf3f32279114e10e226efa0f4177a335f83a05142f8f0fd5e76412dd963ab
	Feb 29 03:11:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:47.115969125Z" level=warning msg="cleaning up after shim disconnected" id=f56bf3f32279114e10e226efa0f4177a335f83a05142f8f0fd5e76412dd963ab namespace=moby
	Feb 29 03:11:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:47.116114232Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:47.133060226Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:47Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6816 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:50.432011169Z" level=info msg="shim disconnected" id=422f92f237f7d7afcf10ac08338b70aa3a1cf9efdf0f8601bd50c16d38fa4828
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:50.432076672Z" level=warning msg="cleaning up after shim disconnected" id=422f92f237f7d7afcf10ac08338b70aa3a1cf9efdf0f8601bd50c16d38fa4828 namespace=moby
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:50.432088772Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.432877409Z" level=info msg="ignoring event" container=422f92f237f7d7afcf10ac08338b70aa3a1cf9efdf0f8601bd50c16d38fa4828 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:50.448549443Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:50Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6852 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.492057982Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.492452600Z" level=info msg="Daemon shutdown complete"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.492510003Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.492562405Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.520457812Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.520625820Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.529926856Z" level=error msg="b3156d5f814068f54d8737ccb9d0a0fa5079fa26836dfa799683ee9ea24a6189 cleanup: failed to delete container from containerd: grpc: the client connection is closing: context canceled"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.530073362Z" level=error msg="Handler for POST /v1.40/containers/b3156d5f814068f54d8737ccb9d0a0fa5079fa26836dfa799683ee9ea24a6189/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.841818066Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.842080779Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.855554310Z" level=error msg="716542c1cd5a006ea6c8ac9be060d1402778b558e5bcc724a6906a2501611a2c cleanup: failed to delete container from containerd: grpc: the client connection is closing: context canceled"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.855872225Z" level=error msg="Handler for POST /v1.40/containers/716542c1cd5a006ea6c8ac9be060d1402778b558e5bcc724a6906a2501611a2c/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.873552653Z" level=error msg="bdc096b632dd9eab747e0a0f9f8dec52f0f7aaf0020dbf5ea16fb7c3b29aeca0 cleanup: failed to delete container from containerd: grpc: the client connection is closing: context canceled"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.873792664Z" level=error msg="Handler for POST /v1.40/containers/bdc096b632dd9eab747e0a0f9f8dec52f0f7aaf0020dbf5ea16fb7c3b29aeca0/start returned error: grpc: the client connection is closing: context canceled"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.894413630Z" level=warning msg="failed to retrieve containerd version: rpc error: code = Canceled desc = grpc: the client connection is closing"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.895496281Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Succeeded.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6250 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6253 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6328 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6432 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6496 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6540 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6579 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6639 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6702 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6763 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6250 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6253 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6328 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6432 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6496 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6540 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6579 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6639 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6702 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6763 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 03:11:51 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:51.559117853Z" level=info msg="Starting up"
	Feb 29 03:11:51 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:51.567204196Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:11:51 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:51.567314107Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:11:51 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:51.567352811Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:11:51 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:51.567369613Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:11:51 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:51.568083688Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:11:52 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:52.568541802Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:11:54 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:54.144450646Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:11:56 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:56.266956703Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:11:58 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:58.908318381Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:01 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:01.995296788Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:05 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:05.409995046Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:08 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:08.883048513Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:12 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:12.406221562Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:15 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:15.061003421Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:18 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:18.563451916Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:21 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:21.820066848Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:25 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:25.234684708Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:27 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:27.852580166Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:31 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:31.109596124Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:34 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:34.461151853Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:37 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:37.641872393Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:41 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:41.099596362Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:43 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:43.638990199Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:47 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:47.203587018Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:49 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:49.694195323Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:51 running-upgrade-537900 dockerd[6875]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6250 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6253 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6328 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6432 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6496 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6540 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6579 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6639 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6702 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6763 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Thu 2024-02-29 03:02:58 UTC, ends at Thu 2024-02-29 03:12:51 UTC. --
	Feb 29 03:03:48 running-upgrade-537900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.432532450Z" level=info msg="Starting up"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.435372915Z" level=info msg="libcontainerd: started new containerd process" pid=680
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.435448841Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.435460645Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.435491055Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.435512563Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48Z" level=warning msg="deprecated version : `1`, please switch to version `2`"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.476465382Z" level=info msg="starting containerd" revision=212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 version=v1.6.4
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.501809396Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.502197528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.504743193Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.504853230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.505564472Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.505682612Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.505708021Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.505721125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.505825861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.506243803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.506577616Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.506691755Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.506767181Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.506862813Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517090089Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517131604Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517157312Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517215432Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517234338Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517316967Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517425203Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517449111Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517467518Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517483823Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517499228Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517515834Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517666385Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.517956184Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.518730547Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.518849187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.518894403Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.518990335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519224915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519249523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519264428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519306543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519328250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519342655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519356260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519373466Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519444790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519552126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519570833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519584537Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519601543Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519614948Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519633954Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.519954063Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.520246262Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.520544564Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 03:03:48 running-upgrade-537900 dockerd[680]: time="2024-02-29T03:03:48.520572173Z" level=info msg="containerd successfully booted in 0.048229s"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.529750192Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.529882337Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.529906646Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.529918450Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.531845505Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.531953041Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.531974148Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.531984752Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.552917366Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.552973385Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.552982589Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.552989491Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.552996493Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.553003196Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.553484859Z" level=info msg="Loading containers: start."
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.675465518Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.753020378Z" level=info msg="Loading containers: done."
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.770907457Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.771029299Z" level=info msg="Daemon has completed initialization"
	Feb 29 03:03:48 running-upgrade-537900 systemd[1]: Started Docker Application Container Engine.
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.812772686Z" level=info msg="API listen on [::]:2376"
	Feb 29 03:03:48 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:03:48.830697078Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 03:04:29 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:04:29.052516661Z" level=info msg="Processing signal 'terminated'"
	Feb 29 03:04:29 running-upgrade-537900 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 03:04:29 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:04:29.053553105Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 29 03:04:29 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:04:29.053993124Z" level=info msg="Daemon shutdown complete"
	Feb 29 03:04:29 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:04:29.054050726Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 03:04:29 running-upgrade-537900 dockerd[674]: time="2024-02-29T03:04:29.054076927Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 03:04:30 running-upgrade-537900 systemd[1]: docker.service: Succeeded.
	Feb 29 03:04:30 running-upgrade-537900 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 03:04:30 running-upgrade-537900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.128057091Z" level=info msg="Starting up"
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.130485094Z" level=info msg="libcontainerd: started new containerd process" pid=945
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.130677402Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.130729205Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.130780707Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.130828209Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30Z" level=warning msg="deprecated version : `1`, please switch to version `2`"
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.167743271Z" level=info msg="starting containerd" revision=212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 version=v1.6.4
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.188724360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.188870766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.191696485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.191815190Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.192239408Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.192340213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.192362714Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.192376614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.192407616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.192730329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193396457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193434659Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193459560Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193471661Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193605666Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193627967Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193642468Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193671469Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193689470Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193705770Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193729071Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193747572Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193764073Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193780274Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193795974Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193810575Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.193946281Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.194235393Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195032127Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195153632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195175433Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195255836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195296238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195313639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195328139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195345840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195360741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195373841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195387942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195409743Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195453944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195469245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195485846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195502247Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195519247Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195531948Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195554549Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195823060Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.195961566Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.196024369Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 03:04:30 running-upgrade-537900 dockerd[945]: time="2024-02-29T03:04:30.196055270Z" level=info msg="containerd successfully booted in 0.029249s"
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.208789309Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.208826711Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.208846411Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.208856312Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.210872497Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.210909099Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.210927100Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:04:30 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:30.210938400Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.755904002Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.756017307Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.756029407Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.756035908Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.756042608Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.756048908Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.756299119Z" level=info msg="Loading containers: start."
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.908165748Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.971491028Z" level=info msg="Loading containers: done."
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.991348169Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Feb 29 03:04:31 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:31.991547377Z" level=info msg="Daemon has completed initialization"
	Feb 29 03:04:32 running-upgrade-537900 systemd[1]: Started Docker Application Container Engine.
	Feb 29 03:04:32 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:32.030207114Z" level=info msg="API listen on [::]:2376"
	Feb 29 03:04:32 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:32.038173251Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 03:04:32 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:32.764481998Z" level=info msg="Processing signal 'terminated'"
	Feb 29 03:04:32 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:32.765818354Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 29 03:04:32 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:32.766189870Z" level=info msg="Daemon shutdown complete"
	Feb 29 03:04:32 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:32.766210171Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 03:04:32 running-upgrade-537900 dockerd[938]: time="2024-02-29T03:04:32.766215271Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 03:04:32 running-upgrade-537900 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 03:04:33 running-upgrade-537900 systemd[1]: docker.service: Succeeded.
	Feb 29 03:04:33 running-upgrade-537900 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 03:04:33 running-upgrade-537900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.827021277Z" level=info msg="Starting up"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.830460323Z" level=info msg="libcontainerd: started new containerd process" pid=1125
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.830657131Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.830716934Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.830779237Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.830825639Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33Z" level=warning msg="deprecated version : `1`, please switch to version `2`"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.867013670Z" level=info msg="starting containerd" revision=212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 version=v1.6.4
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.888628385Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.888774392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.891255097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.891378302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.891658214Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.891812920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.891834421Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.891850522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.891880723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892089632Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892362243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892463848Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892488549Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892500949Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892641555Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892752560Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892771861Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892799162Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892815463Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892832863Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892848364Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892863565Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892879265Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892894466Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892964369Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.892982070Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.893047272Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.893174078Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.893690300Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.893810205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.893830206Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.893880308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.893984212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894003213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894018014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894032314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894046215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894064216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894078116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894095117Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894135619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894151819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894165520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894179320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894196221Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894209622Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894231123Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.894441632Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.895150562Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.895214964Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:33.895232265Z" level=info msg="containerd successfully booted in 0.030064s"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.911056935Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.911174840Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.911197741Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.911207841Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.913324831Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.913359932Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.913376333Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.913387334Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.926878805Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.926963808Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.927010410Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.927046112Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.927083213Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.927117015Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Feb 29 03:04:33 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:33.927652137Z" level=info msg="Loading containers: start."
	Feb 29 03:04:34 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:34.080519109Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 29 03:04:34 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:34.153555200Z" level=info msg="Loading containers: done."
	Feb 29 03:04:34 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:34.173228933Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Feb 29 03:04:34 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:34.173295636Z" level=info msg="Daemon has completed initialization"
	Feb 29 03:04:34 running-upgrade-537900 systemd[1]: Started Docker Application Container Engine.
	Feb 29 03:04:34 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:34.202551175Z" level=info msg="API listen on [::]:2376"
	Feb 29 03:04:34 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:04:34.220482134Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 03:04:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:43.155910453Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref moby/1/index-sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db locked for 336.871821ms (since 2024-02-29 03:04:42.632941708 +0000 UTC m=+8.789125921): unavailable" expected="sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db" ref="index-sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db" total=3829
	Feb 29 03:04:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:43.955357969Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref moby/1/index-sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db locked for 818.969628ms (since 2024-02-29 03:04:42.632941708 +0000 UTC m=+8.789125921): unavailable" expected="sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db" ref="index-sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db" total=3829
	Feb 29 03:04:44 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:44.384609868Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref moby/1/index-sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db locked for 1.578454486s (since 2024-02-29 03:04:42.632941708 +0000 UTC m=+8.789125921): unavailable" expected="sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db" ref="index-sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db" total=3829
	Feb 29 03:04:44 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:44.679496269Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref moby/1/manifest-sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4 locked for 179.83842ms (since 2024-02-29 03:04:44.393444696 +0000 UTC m=+10.549628909): unavailable" expected="sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4" ref="manifest-sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4" total=526
	Feb 29 03:04:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:45.144278313Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref moby/1/manifest-sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4 locked for 699.159789ms (since 2024-02-29 03:04:44.393444696 +0000 UTC m=+10.549628909): unavailable" expected="sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4" ref="manifest-sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4" total=526
	Feb 29 03:04:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:45.694811003Z" level=error msg="(*service).Write failed" error="rpc error: code = Unavailable desc = ref moby/1/manifest-sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4 locked for 1.09765383s (since 2024-02-29 03:04:44.393444696 +0000 UTC m=+10.549628909): unavailable" expected="sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4" ref="manifest-sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4" total=526
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.000370618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.000438724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.000457225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.002454980Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/773f96f2c596239afc3b8af80ba08f2a5803da84e416253cc3c4dab992e38975 pid=1741 runtime=io.containerd.runc.v2
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.059917837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.060047147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.060080849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.060339969Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/cb750ae096b167ce6fee9cf173cb81570f5537bf2ce1871dc4ace5666be9b8d1 pid=1768 runtime=io.containerd.runc.v2
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.077929234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.078043142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.078093546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.079009117Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ceec2146a01886ad08a8bec23f70c867db9126e792739af78a14521271551204 pid=1795 runtime=io.containerd.runc.v2
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.083947700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.084020106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.084033007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.084302728Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5102125c0aabb2aa4110dbb4a8b04ee5c20c5fea40cc0e07d22b3827ef618453 pid=1801 runtime=io.containerd.runc.v2
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.960334669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.960546085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.960650493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:04:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:46.960882011Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/422f92f237f7d7afcf10ac08338b70aa3a1cf9efdf0f8601bd50c16d38fa4828 pid=1898 runtime=io.containerd.runc.v2
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.253898679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.254053890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.254128096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.254765844Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5293717322e32a9e61bbd336b4753c6a4280ba9cef910b4d3056cf40eb5c0979 pid=1944 runtime=io.containerd.runc.v2
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.340720621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.340858531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.340891134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.341106450Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/92dac45ab5c2f109824c750eafd3c495cf506b504a69837317d7d46961d64206 pid=1970 runtime=io.containerd.runc.v2
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.554239810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.554861757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.554919362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:04:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:04:47.555141278Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a2d0c35d6a25f91ded896db2ee51950e9a6f267b1468b1c12a2c82db98159256 pid=2016 runtime=io.containerd.runc.v2
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.432638390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.432717294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.432730595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.440076861Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c42db7c66d49cef965a9fd02cdbd92342aa3c79e241a1338e1af1effcf3128de pid=2456 runtime=io.containerd.runc.v2
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.489270211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.489427619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.489507623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.489755936Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5de768bd612e0ab6153d234026f2084b2df337978efebf564440d5105d66d6b8 pid=2486 runtime=io.containerd.runc.v2
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.772906841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.773089050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.773277660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:05:10 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:10.774042198Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/545389747d412f0742c43b7fcf75b6eb1dc396fc89c10008816439edf7093499 pid=2535 runtime=io.containerd.runc.v2
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.220725347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.220871454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.220906456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.221338977Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/73cf4890d9c30179ad7024211fe1d9c04a0818069a177fd99d7dcefb26ddbb18 pid=2580 runtime=io.containerd.runc.v2
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.834103117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.834208822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.834327828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.834636543Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c0391089a5369d3560f38de3fb981bf4902edace5ec511aaf9e7e4d5f2220fbd pid=2643 runtime=io.containerd.runc.v2
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.844667238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.845077258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.845250467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:05:11 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:11.848915047Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b3dda49d7ad2e826f09fa11996880d8d59e9bd993e89fd85743b27f6452efb21 pid=2649 runtime=io.containerd.runc.v2
	Feb 29 03:05:42 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:05:42.373981748Z" level=info msg="ignoring event" container=b3dda49d7ad2e826f09fa11996880d8d59e9bd993e89fd85743b27f6452efb21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:05:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:42.375060695Z" level=info msg="shim disconnected" id=b3dda49d7ad2e826f09fa11996880d8d59e9bd993e89fd85743b27f6452efb21
	Feb 29 03:05:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:42.375224302Z" level=warning msg="cleaning up after shim disconnected" id=b3dda49d7ad2e826f09fa11996880d8d59e9bd993e89fd85743b27f6452efb21 namespace=moby
	Feb 29 03:05:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:42.375241003Z" level=info msg="cleaning up dead shim"
	Feb 29 03:05:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:42.392420046Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:05:42Z\" level=info msg=\"starting signal loop\" namespace=moby pid=3000 runtime=io.containerd.runc.v2\n"
	Feb 29 03:05:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:43.058370866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:05:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:43.058510472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:05:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:43.058525873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:05:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:05:43.059099097Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/7dcafbe93c312107c70523f6252f495dd771ede92533ff03a100d8cc9de33c8e pid=3020 runtime=io.containerd.runc.v2
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.323610529Z" level=info msg="Processing signal 'terminated'"
	Feb 29 03:11:40 running-upgrade-537900 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.648543251Z" level=info msg="ignoring event" container=5102125c0aabb2aa4110dbb4a8b04ee5c20c5fea40cc0e07d22b3827ef618453 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.649748408Z" level=info msg="shim disconnected" id=5102125c0aabb2aa4110dbb4a8b04ee5c20c5fea40cc0e07d22b3827ef618453
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.650206529Z" level=warning msg="cleaning up after shim disconnected" id=5102125c0aabb2aa4110dbb4a8b04ee5c20c5fea40cc0e07d22b3827ef618453 namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.650269032Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.664476698Z" level=info msg="ignoring event" container=cb750ae096b167ce6fee9cf173cb81570f5537bf2ce1871dc4ace5666be9b8d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.664896317Z" level=info msg="shim disconnected" id=cb750ae096b167ce6fee9cf173cb81570f5537bf2ce1871dc4ace5666be9b8d1
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.665412641Z" level=warning msg="cleaning up after shim disconnected" id=cb750ae096b167ce6fee9cf173cb81570f5537bf2ce1871dc4ace5666be9b8d1 namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.665528847Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.688381817Z" level=info msg="ignoring event" container=c42db7c66d49cef965a9fd02cdbd92342aa3c79e241a1338e1af1effcf3128de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.688932443Z" level=info msg="ignoring event" container=92dac45ab5c2f109824c750eafd3c495cf506b504a69837317d7d46961d64206 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.689388465Z" level=info msg="shim disconnected" id=73cf4890d9c30179ad7024211fe1d9c04a0818069a177fd99d7dcefb26ddbb18
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.690071997Z" level=warning msg="cleaning up after shim disconnected" id=73cf4890d9c30179ad7024211fe1d9c04a0818069a177fd99d7dcefb26ddbb18 namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.690196002Z" level=info msg="ignoring event" container=73cf4890d9c30179ad7024211fe1d9c04a0818069a177fd99d7dcefb26ddbb18 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.690364310Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.689585574Z" level=info msg="shim disconnected" id=c42db7c66d49cef965a9fd02cdbd92342aa3c79e241a1338e1af1effcf3128de
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.691356657Z" level=warning msg="cleaning up after shim disconnected" id=c42db7c66d49cef965a9fd02cdbd92342aa3c79e241a1338e1af1effcf3128de namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.691463162Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.689643777Z" level=info msg="shim disconnected" id=92dac45ab5c2f109824c750eafd3c495cf506b504a69837317d7d46961d64206
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.691940384Z" level=warning msg="cleaning up after shim disconnected" id=92dac45ab5c2f109824c750eafd3c495cf506b504a69837317d7d46961d64206 namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.692045789Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.718256317Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6066 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.721344262Z" level=info msg="shim disconnected" id=545389747d412f0742c43b7fcf75b6eb1dc396fc89c10008816439edf7093499
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.721386264Z" level=warning msg="cleaning up after shim disconnected" id=545389747d412f0742c43b7fcf75b6eb1dc396fc89c10008816439edf7093499 namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.721397164Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.721813284Z" level=info msg="ignoring event" container=773f96f2c596239afc3b8af80ba08f2a5803da84e416253cc3c4dab992e38975 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.721948990Z" level=info msg="shim disconnected" id=773f96f2c596239afc3b8af80ba08f2a5803da84e416253cc3c4dab992e38975
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.722057295Z" level=warning msg="cleaning up after shim disconnected" id=773f96f2c596239afc3b8af80ba08f2a5803da84e416253cc3c4dab992e38975 namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.722085496Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.723517363Z" level=info msg="ignoring event" container=545389747d412f0742c43b7fcf75b6eb1dc396fc89c10008816439edf7093499 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.759466147Z" level=info msg="ignoring event" container=7dcafbe93c312107c70523f6252f495dd771ede92533ff03a100d8cc9de33c8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.764466882Z" level=info msg="shim disconnected" id=7dcafbe93c312107c70523f6252f495dd771ede92533ff03a100d8cc9de33c8e
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.764540885Z" level=warning msg="cleaning up after shim disconnected" id=7dcafbe93c312107c70523f6252f495dd771ede92533ff03a100d8cc9de33c8e namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.764553086Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.784413316Z" level=info msg="shim disconnected" id=5293717322e32a9e61bbd336b4753c6a4280ba9cef910b4d3056cf40eb5c0979
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.785075247Z" level=info msg="ignoring event" container=ceec2146a01886ad08a8bec23f70c867db9126e792739af78a14521271551204 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.785126650Z" level=info msg="ignoring event" container=5293717322e32a9e61bbd336b4753c6a4280ba9cef910b4d3056cf40eb5c0979 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.785547869Z" level=warning msg="cleaning up after shim disconnected" id=5293717322e32a9e61bbd336b4753c6a4280ba9cef910b4d3056cf40eb5c0979 namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.785683876Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.815730683Z" level=info msg="shim disconnected" id=ceec2146a01886ad08a8bec23f70c867db9126e792739af78a14521271551204
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.815924792Z" level=warning msg="cleaning up after shim disconnected" id=ceec2146a01886ad08a8bec23f70c867db9126e792739af78a14521271551204 namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.816743931Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.819412856Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6049 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.820059286Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6113 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.820472805Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6085 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:40.821419750Z" level=info msg="ignoring event" container=5de768bd612e0ab6153d234026f2084b2df337978efebf564440d5105d66d6b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.833228203Z" level=info msg="shim disconnected" id=5de768bd612e0ab6153d234026f2084b2df337978efebf564440d5105d66d6b8
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.833418312Z" level=warning msg="cleaning up after shim disconnected" id=5de768bd612e0ab6153d234026f2084b2df337978efebf564440d5105d66d6b8 namespace=moby
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.833516616Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.847650479Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6107 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.867504309Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6095 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.872211429Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6124 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.907481581Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6139 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.951194629Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6159 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:40 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:40.980023880Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6202 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.031655098Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:40Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6190 runtime=io.containerd.runc.v2\ntime=\"2024-02-29T03:11:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.247107592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.247193996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.247208596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.247106792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.247382304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.247415606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.247697719Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e165fece89825bcd351eeff1c5f15fa179878aec4c8ad74404c2cf6dbdd932d1 pid=6253 runtime=io.containerd.runc.v2
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.248113339Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d313a243e210fdabe07a20c7ac394065898c69d451eddba99d89e1a44de81677 pid=6250 runtime=io.containerd.runc.v2
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.512266413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.512625630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.512753736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.513358464Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0077f00dcb9548c5f0ab56cc1bd6f826e5c2450c861ba2b3bb6ed3697015254f pid=6328 runtime=io.containerd.runc.v2
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.879128899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.879212803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.879226604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:41 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:41.880126146Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f56bf3f32279114e10e226efa0f4177a335f83a05142f8f0fd5e76412dd963ab pid=6389 runtime=io.containerd.runc.v2
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.246737320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.247116538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.247409052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.247745568Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/587325f9692391e02fb1a5419728434cf3e2ea91bebdf36177abf66e448b7f4c pid=6432 runtime=io.containerd.runc.v2
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.265388194Z" level=info msg="shim disconnected" id=a2d0c35d6a25f91ded896db2ee51950e9a6f267b1468b1c12a2c82db98159256
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.265729210Z" level=warning msg="cleaning up after shim disconnected" id=a2d0c35d6a25f91ded896db2ee51950e9a6f267b1468b1c12a2c82db98159256 namespace=moby
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.265963021Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:42.267147276Z" level=info msg="ignoring event" container=a2d0c35d6a25f91ded896db2ee51950e9a6f267b1468b1c12a2c82db98159256 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.295200791Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:42Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6448 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.501506055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.501562658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.501575759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:42 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:42.501733366Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/9d9279755591c2ba85c71ff575d9496bc06790ed444879144b0438ce9424fcf8 pid=6496 runtime=io.containerd.runc.v2
	Feb 29 03:11:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:43.362025567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:43.362104771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:43.362118872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:43.362272179Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a46421a03b8957a8c26e3459810f1277ee95ae1d970a6740270ee52807180780 pid=6540 runtime=io.containerd.runc.v2
	Feb 29 03:11:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:43.803769461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:43.803919568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:43.803958370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:43 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:43.804210982Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d4267c7991613d41dfb3745520f7f25e2abc970a5a7ce256d137fa19372f70c6 pid=6579 runtime=io.containerd.runc.v2
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.337663718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.338378752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.338774470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.339770917Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0668669812f03181e4cd95cbe256cd6f42a4542d45070354aefbfa8a34ee094d pid=6639 runtime=io.containerd.runc.v2
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:45.626930869Z" level=info msg="ignoring event" container=c0391089a5369d3560f38de3fb981bf4902edace5ec511aaf9e7e4d5f2220fbd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.627526397Z" level=info msg="shim disconnected" id=c0391089a5369d3560f38de3fb981bf4902edace5ec511aaf9e7e4d5f2220fbd
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.627596100Z" level=warning msg="cleaning up after shim disconnected" id=c0391089a5369d3560f38de3fb981bf4902edace5ec511aaf9e7e4d5f2220fbd namespace=moby
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.627609101Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.643970468Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:45Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6675 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.805140618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.805287025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.805351828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:45 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:45.805705044Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/608f97f2a83aa2d7ba3be31b0e5eef6abacf435bdade0f9e96650ee184792ad8 pid=6702 runtime=io.containerd.runc.v2
	Feb 29 03:11:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:46.739423385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:11:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:46.739516590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:11:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:46.739530290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:11:46 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:46.740516536Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6d629a0cc6494682cdd14f644ba1747b9f47af78dc599ab243734757742b7276 pid=6763 runtime=io.containerd.runc.v2
	Feb 29 03:11:47 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:47.114559459Z" level=info msg="ignoring event" container=f56bf3f32279114e10e226efa0f4177a335f83a05142f8f0fd5e76412dd963ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:47.115542705Z" level=info msg="shim disconnected" id=f56bf3f32279114e10e226efa0f4177a335f83a05142f8f0fd5e76412dd963ab
	Feb 29 03:11:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:47.115969125Z" level=warning msg="cleaning up after shim disconnected" id=f56bf3f32279114e10e226efa0f4177a335f83a05142f8f0fd5e76412dd963ab namespace=moby
	Feb 29 03:11:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:47.116114232Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:47 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:47.133060226Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:47Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6816 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:50.432011169Z" level=info msg="shim disconnected" id=422f92f237f7d7afcf10ac08338b70aa3a1cf9efdf0f8601bd50c16d38fa4828
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:50.432076672Z" level=warning msg="cleaning up after shim disconnected" id=422f92f237f7d7afcf10ac08338b70aa3a1cf9efdf0f8601bd50c16d38fa4828 namespace=moby
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:50.432088772Z" level=info msg="cleaning up dead shim"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.432877409Z" level=info msg="ignoring event" container=422f92f237f7d7afcf10ac08338b70aa3a1cf9efdf0f8601bd50c16d38fa4828 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1125]: time="2024-02-29T03:11:50.448549443Z" level=warning msg="cleanup warnings time=\"2024-02-29T03:11:50Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6852 runtime=io.containerd.runc.v2\n"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.492057982Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.492452600Z" level=info msg="Daemon shutdown complete"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.492510003Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.492562405Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.520457812Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.520625820Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.529926856Z" level=error msg="b3156d5f814068f54d8737ccb9d0a0fa5079fa26836dfa799683ee9ea24a6189 cleanup: failed to delete container from containerd: grpc: the client connection is closing: context canceled"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.530073362Z" level=error msg="Handler for POST /v1.40/containers/b3156d5f814068f54d8737ccb9d0a0fa5079fa26836dfa799683ee9ea24a6189/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.841818066Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.842080779Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.855554310Z" level=error msg="716542c1cd5a006ea6c8ac9be060d1402778b558e5bcc724a6906a2501611a2c cleanup: failed to delete container from containerd: grpc: the client connection is closing: context canceled"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.855872225Z" level=error msg="Handler for POST /v1.40/containers/716542c1cd5a006ea6c8ac9be060d1402778b558e5bcc724a6906a2501611a2c/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.873552653Z" level=error msg="bdc096b632dd9eab747e0a0f9f8dec52f0f7aaf0020dbf5ea16fb7c3b29aeca0 cleanup: failed to delete container from containerd: grpc: the client connection is closing: context canceled"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.873792664Z" level=error msg="Handler for POST /v1.40/containers/bdc096b632dd9eab747e0a0f9f8dec52f0f7aaf0020dbf5ea16fb7c3b29aeca0/start returned error: grpc: the client connection is closing: context canceled"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.894413630Z" level=warning msg="failed to retrieve containerd version: rpc error: code = Canceled desc = grpc: the client connection is closing"
	Feb 29 03:11:50 running-upgrade-537900 dockerd[1119]: time="2024-02-29T03:11:50.895496281Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Succeeded.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6250 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6253 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6328 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6432 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6496 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6540 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6579 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6639 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6702 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6763 (containerd-shim) remains running after unit stopped.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6250 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6253 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6328 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6432 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6496 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6540 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6579 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6639 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6702 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: docker.service: Found left-over process 6763 (containerd-shim) in control group while starting unit. Ignoring.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Feb 29 03:11:51 running-upgrade-537900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 03:11:51 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:51.559117853Z" level=info msg="Starting up"
	Feb 29 03:11:51 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:51.567204196Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 29 03:11:51 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:51.567314107Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 29 03:11:51 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:51.567352811Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 29 03:11:51 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:51.567369613Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 29 03:11:51 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:51.568083688Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:11:52 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:52.568541802Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:11:54 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:54.144450646Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:11:56 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:56.266956703Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:11:58 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:11:58.908318381Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:01 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:01.995296788Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:05 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:05.409995046Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:08 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:08.883048513Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:12 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:12.406221562Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:15 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:15.061003421Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:18 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:18.563451916Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:21 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:21.820066848Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:25 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:25.234684708Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:27 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:27.852580166Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:31 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:31.109596124Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:34 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:34.461151853Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:37 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:37.641872393Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:41 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:41.099596362Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:43 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:43.638990199Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:47 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:47.203587018Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:49 running-upgrade-537900 dockerd[6875]: time="2024-02-29T03:12:49.694195323Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Feb 29 03:12:51 running-upgrade-537900 dockerd[6875]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6250 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6253 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6328 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6432 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6496 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6540 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6579 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6639 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6702 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: docker.service: Unit process 6763 (containerd-shim) remains running after unit stopped.
	Feb 29 03:12:51 running-upgrade-537900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0229 03:12:51.510317    3808 out.go:239] * 
	* 
	W0229 03:12:51.511932    3808 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 03:12:51.514650    3808 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-windows-amd64.exe start -p running-upgrade-537900 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-02-29 03:12:52.0257993 +0000 UTC m=+9031.003164301
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-537900 -n running-upgrade-537900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-537900 -n running-upgrade-537900: exit status 6 (12.1402766s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 03:12:52.172573    7828 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 03:13:04.109421    7828 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-537900" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-537900" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-537900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-537900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-537900: (1m15.2284319s)
--- FAIL: TestRunningBinaryUpgrade (929.84s)

TestKubernetesUpgrade (787.67s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-398700 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-398700 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv: exit status 109 (10m9.9182284s)

-- stdout --
	* [kubernetes-upgrade-398700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node kubernetes-upgrade-398700 in cluster kubernetes-upgrade-398700
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	W0229 03:00:31.223879   12724 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 03:00:31.273683   12724 out.go:291] Setting OutFile to fd 1940 ...
	I0229 03:00:31.274292   12724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 03:00:31.274292   12724 out.go:304] Setting ErrFile to fd 1944...
	I0229 03:00:31.274292   12724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 03:00:31.299036   12724 out.go:298] Setting JSON to false
	I0229 03:00:31.303607   12724 start.go:129] hostinfo: {"hostname":"minikube5","uptime":271857,"bootTime":1708903773,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 03:00:31.303607   12724 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 03:00:31.304744   12724 out.go:177] * [kubernetes-upgrade-398700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 03:00:31.305133   12724 notify.go:220] Checking for updates...
	I0229 03:00:31.305659   12724 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 03:00:31.306292   12724 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 03:00:31.307096   12724 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 03:00:31.308069   12724 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 03:00:31.308310   12724 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 03:00:31.310692   12724 config.go:182] Loaded profile config "force-systemd-env-812500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 03:00:31.311199   12724 config.go:182] Loaded profile config "offline-docker-419900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 03:00:31.311571   12724 config.go:182] Loaded profile config "running-upgrade-537900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0229 03:00:31.311738   12724 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 03:00:36.649706   12724 out.go:177] * Using the hyperv driver based on user configuration
	I0229 03:00:36.650476   12724 start.go:299] selected driver: hyperv
	I0229 03:00:36.650540   12724 start.go:903] validating driver "hyperv" against <nil>
	I0229 03:00:36.650579   12724 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 03:00:36.697273   12724 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 03:00:36.698273   12724 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 03:00:36.698273   12724 cni.go:84] Creating CNI manager for ""
	I0229 03:00:36.698273   12724 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 03:00:36.698273   12724 start_flags.go:323] config:
	{Name:kubernetes-upgrade-398700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-398700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 03:00:36.698273   12724 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 03:00:36.699487   12724 out.go:177] * Starting control plane node kubernetes-upgrade-398700 in cluster kubernetes-upgrade-398700
	I0229 03:00:36.700738   12724 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 03:00:36.701138   12724 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 03:00:36.701138   12724 cache.go:56] Caching tarball of preloaded images
	I0229 03:00:36.701357   12724 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 03:00:36.701721   12724 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0229 03:00:36.701953   12724 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\config.json ...
	I0229 03:00:36.702745   12724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\config.json: {Name:mke2de5039425372141f3eb2ed2f6148e87958fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 03:00:36.704092   12724 start.go:365] acquiring machines lock for kubernetes-upgrade-398700: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 03:04:07.007941   12724 start.go:369] acquired machines lock for "kubernetes-upgrade-398700" in 3m30.2920989s
	I0229 03:04:07.008627   12724 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-398700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Ku
bernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-398700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 03:04:07.008791   12724 start.go:125] createHost starting for "" (driver="hyperv")
	I0229 03:04:07.010177   12724 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 03:04:07.010676   12724 start.go:159] libmachine.API.Create for "kubernetes-upgrade-398700" (driver="hyperv")
	I0229 03:04:07.010702   12724 client.go:168] LocalClient.Create starting
	I0229 03:04:07.011076   12724 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0229 03:04:07.011619   12724 main.go:141] libmachine: Decoding PEM data...
	I0229 03:04:07.011747   12724 main.go:141] libmachine: Parsing certificate...
	I0229 03:04:07.011930   12724 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0229 03:04:07.011982   12724 main.go:141] libmachine: Decoding PEM data...
	I0229 03:04:07.011982   12724 main.go:141] libmachine: Parsing certificate...
	I0229 03:04:07.011982   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0229 03:04:08.869664   12724 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0229 03:04:08.869664   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:08.870719   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0229 03:04:10.561310   12724 main.go:141] libmachine: [stdout =====>] : False
	
	I0229 03:04:10.562165   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:10.562165   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 03:04:12.053658   12724 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 03:04:12.053658   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:12.053774   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 03:04:15.556715   12724 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 03:04:15.556715   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:15.558616   12724 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 03:04:15.945960   12724 main.go:141] libmachine: Creating SSH key...
	I0229 03:04:16.079395   12724 main.go:141] libmachine: Creating VM...
	I0229 03:04:16.079395   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 03:04:19.238780   12724 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 03:04:19.238867   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:19.238956   12724 main.go:141] libmachine: Using switch "Default Switch"
	I0229 03:04:19.239085   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 03:04:20.952042   12724 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 03:04:20.952042   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:20.952984   12724 main.go:141] libmachine: Creating VHD
	I0229 03:04:20.953005   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-398700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0229 03:04:24.634383   12724 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-398700\
	                          fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6D4DD970-CE71-4311-B92A-DAD8F94B8119
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0229 03:04:24.634414   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:24.634414   12724 main.go:141] libmachine: Writing magic tar header
	I0229 03:04:24.634494   12724 main.go:141] libmachine: Writing SSH key tar header
	I0229 03:04:24.643069   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-398700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-398700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0229 03:04:27.924596   12724 main.go:141] libmachine: [stdout =====>] : 
	I0229 03:04:27.924596   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:27.924596   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-398700\disk.vhd' -SizeBytes 20000MB
	I0229 03:04:30.508497   12724 main.go:141] libmachine: [stdout =====>] : 
	I0229 03:04:30.508751   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:30.508999   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM kubernetes-upgrade-398700 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-398700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0229 03:04:33.909968   12724 main.go:141] libmachine: [stdout =====>] : 
	Name                      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                      ----- ----------- ----------------- ------   ------             -------
	kubernetes-upgrade-398700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 03:04:33.909968   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:33.909968   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName kubernetes-upgrade-398700 -DynamicMemoryEnabled $false
	I0229 03:04:36.077907   12724 main.go:141] libmachine: [stdout =====>] : 
	I0229 03:04:36.077975   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:36.078030   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor kubernetes-upgrade-398700 -Count 2
	I0229 03:04:38.173877   12724 main.go:141] libmachine: [stdout =====>] : 
	I0229 03:04:38.173877   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:38.174843   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName kubernetes-upgrade-398700 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-398700\boot2docker.iso'
	I0229 03:04:40.629676   12724 main.go:141] libmachine: [stdout =====>] : 
	I0229 03:04:40.630675   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:40.630893   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName kubernetes-upgrade-398700 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-398700\disk.vhd'
	I0229 03:04:43.124048   12724 main.go:141] libmachine: [stdout =====>] : 
	I0229 03:04:43.124048   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:43.124125   12724 main.go:141] libmachine: Starting VM...
	I0229 03:04:43.124125   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM kubernetes-upgrade-398700
	I0229 03:04:45.818168   12724 main.go:141] libmachine: [stdout =====>] : 
	I0229 03:04:45.818168   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:45.818168   12724 main.go:141] libmachine: Waiting for host to start...
	I0229 03:04:45.818168   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:04:47.981012   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:04:47.981093   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:47.981179   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:04:50.409657   12724 main.go:141] libmachine: [stdout =====>] : 
	I0229 03:04:50.410059   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:51.426564   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:04:53.537854   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:04:53.538067   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:53.538067   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:04:55.944086   12724 main.go:141] libmachine: [stdout =====>] : 
	I0229 03:04:55.944086   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:56.953660   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:04:59.243421   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:04:59.243421   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:04:59.243511   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:05:01.811433   12724 main.go:141] libmachine: [stdout =====>] : 
	I0229 03:05:01.811910   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:02.828232   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:05:05.044937   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:05:05.045444   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:05.045535   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:05:07.489709   12724 main.go:141] libmachine: [stdout =====>] : 
	I0229 03:05:07.489709   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:08.496953   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:05:10.686118   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:05:10.686550   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:10.686620   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:05:13.286124   12724 main.go:141] libmachine: [stdout =====>] : 172.19.7.22
	
	I0229 03:05:13.286769   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:13.286769   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:05:15.927120   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:05:15.927120   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:15.927120   12724 machine.go:88] provisioning docker machine ...
	I0229 03:05:15.927120   12724 buildroot.go:166] provisioning hostname "kubernetes-upgrade-398700"
	I0229 03:05:15.927357   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:05:17.963442   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:05:17.963442   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:17.963442   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:05:20.411813   12724 main.go:141] libmachine: [stdout =====>] : 172.19.7.22
	
	I0229 03:05:20.412808   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:20.417536   12724 main.go:141] libmachine: Using SSH client type: native
	I0229 03:05:20.418145   12724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.7.22 22 <nil> <nil>}
	I0229 03:05:20.418145   12724 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-398700 && echo "kubernetes-upgrade-398700" | sudo tee /etc/hostname
	I0229 03:05:20.581425   12724 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-398700
	
	I0229 03:05:20.581425   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:05:22.619637   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:05:22.619721   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:22.619935   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:05:25.026851   12724 main.go:141] libmachine: [stdout =====>] : 172.19.7.22
	
	I0229 03:05:25.027671   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:25.031464   12724 main.go:141] libmachine: Using SSH client type: native
	I0229 03:05:25.031921   12724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.7.22 22 <nil> <nil>}
	I0229 03:05:25.031921   12724 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-398700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-398700/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-398700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 03:05:25.177783   12724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 03:05:25.177783   12724 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 03:05:25.177783   12724 buildroot.go:174] setting up certificates
	I0229 03:05:25.179067   12724 provision.go:83] configureAuth start
	I0229 03:05:25.179067   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:05:27.192490   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:05:27.192739   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:27.192820   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:05:29.583176   12724 main.go:141] libmachine: [stdout =====>] : 172.19.7.22
	
	I0229 03:05:29.583176   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:29.583176   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:05:31.597898   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:05:31.597898   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:31.597898   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:05:34.008311   12724 main.go:141] libmachine: [stdout =====>] : 172.19.7.22
	
	I0229 03:05:34.008535   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:34.008535   12724 provision.go:138] copyHostCerts
	I0229 03:05:34.008882   12724 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 03:05:34.008942   12724 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 03:05:34.009287   12724 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 03:05:34.010052   12724 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 03:05:34.010052   12724 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 03:05:34.010581   12724 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 03:05:34.011549   12724 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 03:05:34.011549   12724 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 03:05:34.011860   12724 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0229 03:05:34.012682   12724 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-398700 san=[172.19.7.22 172.19.7.22 localhost 127.0.0.1 minikube kubernetes-upgrade-398700]
	I0229 03:05:34.325536   12724 provision.go:172] copyRemoteCerts
	I0229 03:05:34.334359   12724 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 03:05:34.334359   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:05:36.348853   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:05:36.348853   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:36.348939   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:05:38.771360   12724 main.go:141] libmachine: [stdout =====>] : 172.19.7.22
	
	I0229 03:05:38.771937   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:38.772306   12724 sshutil.go:53] new ssh client: &{IP:172.19.7.22 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-398700\id_rsa Username:docker}
	I0229 03:05:38.880699   12724 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5460855s)
	I0229 03:05:38.882105   12724 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 03:05:38.939817   12724 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0229 03:05:38.986231   12724 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 03:05:39.033121   12724 provision.go:86] duration metric: configureAuth took 13.8532047s
	I0229 03:05:39.033121   12724 buildroot.go:189] setting minikube options for container-runtime
	I0229 03:05:39.033203   12724 config.go:182] Loaded profile config "kubernetes-upgrade-398700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0229 03:05:39.033203   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:05:41.055922   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:05:41.055922   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:41.055922   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:05:43.486696   12724 main.go:141] libmachine: [stdout =====>] : 172.19.7.22
	
	I0229 03:05:43.486696   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:43.493592   12724 main.go:141] libmachine: Using SSH client type: native
	I0229 03:05:43.493592   12724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.7.22 22 <nil> <nil>}
	I0229 03:05:43.494141   12724 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 03:05:43.622878   12724 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 03:05:43.622878   12724 buildroot.go:70] root file system type: tmpfs
	I0229 03:05:43.622878   12724 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 03:05:43.623549   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:05:45.602084   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:05:45.602084   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:45.602084   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:05:48.002369   12724 main.go:141] libmachine: [stdout =====>] : 172.19.7.22
	
	I0229 03:05:48.002585   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:48.007184   12724 main.go:141] libmachine: Using SSH client type: native
	I0229 03:05:48.007555   12724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.7.22 22 <nil> <nil>}
	I0229 03:05:48.007555   12724 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 03:05:48.170208   12724 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 03:05:48.170208   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:05:50.201808   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:05:50.201998   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:50.201998   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:05:52.615673   12724 main.go:141] libmachine: [stdout =====>] : 172.19.7.22
	
	I0229 03:05:52.615673   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:52.620389   12724 main.go:141] libmachine: Using SSH client type: native
	I0229 03:05:52.620836   12724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.7.22 22 <nil> <nil>}
	I0229 03:05:52.620894   12724 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 03:05:53.653818   12724 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 03:05:53.653818   12724 machine.go:91] provisioned docker machine in 37.7245895s
	I0229 03:05:53.653818   12724 client.go:171] LocalClient.Create took 1m46.6370801s
	I0229 03:05:53.653818   12724 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-398700" took 1m46.6371809s
	I0229 03:05:53.653818   12724 start.go:300] post-start starting for "kubernetes-upgrade-398700" (driver="hyperv")
	I0229 03:05:53.653818   12724 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 03:05:53.664782   12724 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 03:05:53.664782   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:05:55.705719   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:05:55.705719   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:55.705822   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:05:58.142773   12724 main.go:141] libmachine: [stdout =====>] : 172.19.7.22
	
	I0229 03:05:58.143354   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:05:58.143975   12724 sshutil.go:53] new ssh client: &{IP:172.19.7.22 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-398700\id_rsa Username:docker}
	I0229 03:05:58.260500   12724 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5953704s)
	I0229 03:05:58.272278   12724 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 03:05:58.279389   12724 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 03:05:58.279389   12724 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 03:05:58.279926   12724 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 03:05:58.280615   12724 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> 33122.pem in /etc/ssl/certs
	I0229 03:05:58.292158   12724 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 03:05:58.310162   12724 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /etc/ssl/certs/33122.pem (1708 bytes)
	I0229 03:05:58.358709   12724 start.go:303] post-start completed in 4.7046281s
	I0229 03:05:58.361220   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:06:00.387146   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:06:00.387146   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:06:00.387146   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:06:02.811737   12724 main.go:141] libmachine: [stdout =====>] : 172.19.7.22
	
	I0229 03:06:02.811737   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:06:02.812675   12724 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\config.json ...
	I0229 03:06:02.815145   12724 start.go:128] duration metric: createHost completed in 1m55.7998799s
	I0229 03:06:02.815145   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:06:04.834771   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:06:04.834771   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:06:04.835933   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:06:07.298090   12724 main.go:141] libmachine: [stdout =====>] : 172.19.7.22
	
	I0229 03:06:07.298090   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:06:07.302303   12724 main.go:141] libmachine: Using SSH client type: native
	I0229 03:06:07.302768   12724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.7.22 22 <nil> <nil>}
	I0229 03:06:07.302768   12724 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 03:06:07.432167   12724 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709175967.597586605
	
	I0229 03:06:07.432167   12724 fix.go:206] guest clock: 1709175967.597586605
	I0229 03:06:07.432167   12724 fix.go:219] Guest: 2024-02-29 03:06:07.597586605 +0000 UTC Remote: 2024-02-29 03:06:02.815145 +0000 UTC m=+331.658495201 (delta=4.782441605s)
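The clock-skew check logged above (fix.go:219) is just the difference between the guest's `date +%s.%N` and the host-side timestamp taken when `createHost` completed; when the delta is large enough, minikube resets the guest clock with `sudo date -s @<epoch>` as seen a few lines below. A minimal standalone sketch of the delta computation, with rounded, hypothetical epoch values (the log's actual delta was 4.78s):

```shell
# Hypothetical guest and host epoch timestamps (whole seconds for clarity;
# the real code compares fractional seconds).
guest=1709175967
remote=1709175962
# Positive delta means the guest clock is ahead of the host's view.
delta=$((guest - remote))
echo "delta=${delta}s"
```

In the real flow the guest is then forced to the host-derived epoch, which is why the subsequent `sudo date -s @1709175967` lands on `Thu Feb 29 03:06:07 UTC 2024`.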
	I0229 03:06:07.432276   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:06:09.446383   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:06:09.446550   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:06:09.446654   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:06:11.846760   12724 main.go:141] libmachine: [stdout =====>] : 172.19.7.22
	
	I0229 03:06:11.846760   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:06:11.851950   12724 main.go:141] libmachine: Using SSH client type: native
	I0229 03:06:11.852490   12724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.7.22 22 <nil> <nil>}
	I0229 03:06:11.852717   12724 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709175967
	I0229 03:06:12.001078   12724 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 03:06:07 UTC 2024
	
	I0229 03:06:12.001169   12724 fix.go:226] clock set: Thu Feb 29 03:06:07 UTC 2024
	 (err=<nil>)
	I0229 03:06:12.001229   12724 start.go:83] releasing machines lock for "kubernetes-upgrade-398700", held for 2m4.9862418s
	I0229 03:06:12.001473   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:06:14.037027   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:06:14.037027   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:06:14.037450   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:06:16.473237   12724 main.go:141] libmachine: [stdout =====>] : 172.19.7.22
	
	I0229 03:06:16.473921   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:06:16.477529   12724 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 03:06:16.477698   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:06:16.488939   12724 ssh_runner.go:195] Run: cat /version.json
	I0229 03:06:16.489055   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:06:18.594240   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:06:18.594240   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:06:18.595313   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:06:18.600450   12724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:06:18.600624   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:06:18.600737   12724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-398700 ).networkadapters[0]).ipaddresses[0]
	I0229 03:06:21.080468   12724 main.go:141] libmachine: [stdout =====>] : 172.19.7.22
	
	I0229 03:06:21.081058   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:06:21.081413   12724 sshutil.go:53] new ssh client: &{IP:172.19.7.22 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-398700\id_rsa Username:docker}
	I0229 03:06:21.104061   12724 main.go:141] libmachine: [stdout =====>] : 172.19.7.22
	
	I0229 03:06:21.104061   12724 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:06:21.104977   12724 sshutil.go:53] new ssh client: &{IP:172.19.7.22 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-398700\id_rsa Username:docker}
	I0229 03:06:21.174863   12724 ssh_runner.go:235] Completed: cat /version.json: (4.6856618s)
	I0229 03:06:21.188410   12724 ssh_runner.go:195] Run: systemctl --version
	I0229 03:06:21.306384   12724 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8284427s)
	I0229 03:06:21.316220   12724 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 03:06:21.325743   12724 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 03:06:21.335175   12724 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0229 03:06:21.361388   12724 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0229 03:06:21.392965   12724 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
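The two `find ... -exec sed` runs above rewrite any pre-existing bridge/podman CNI configs so their subnet and gateway match minikube's pod CIDR (10.244.0.0/16). The same substitution can be exercised in isolation against a sample conflist fragment (scratch path and contents are hypothetical, not the VM's real `/etc/cni/net.d/87-podman-bridge.conflist`):

```shell
# Sample fragment with podman's default subnet/gateway.
cat > /tmp/87-podman-bridge.conflist <<'EOF'
      "subnet": "10.88.0.0/16",
      "gateway": "10.88.0.1"
EOF
# Same sed expressions the log runs, pointed at the scratch file.
sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
          -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' \
  /tmp/87-podman-bridge.conflist
cat /tmp/87-podman-bridge.conflist
```

Note the capture groups preserve indentation and any trailing comma, which is why the rewrite is safe to apply line-by-line to JSON-ish conflist files.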
	I0229 03:06:21.392965   12724 start.go:475] detecting cgroup driver to use...
	I0229 03:06:21.392965   12724 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 03:06:21.442421   12724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0229 03:06:21.476515   12724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 03:06:21.496224   12724 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 03:06:21.505326   12724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 03:06:21.534146   12724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 03:06:21.576151   12724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 03:06:21.606918   12724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 03:06:21.645746   12724 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 03:06:21.682510   12724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
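The containerd.go:146 step above flips containerd to the "cgroupfs" driver purely with in-place `sed` edits to `config.toml`. A sketch of the key `SystemdCgroup` rewrite against a minimal sample file (scratch path and fragment are illustrative; the real `/etc/containerd/config.toml` is much larger):

```shell
# Sample fragment with systemd cgroups enabled.
cat > /tmp/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same substitution the log runs: force SystemdCgroup = false, keeping indent.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/config.toml
grep SystemdCgroup /tmp/config.toml
```

The surrounding log lines do the rest of the normalization the same way: retiring the `io.containerd.runtime.v1.linux` / `runc.v1` shims to `runc.v2` and deleting any stale `systemd_cgroup` key, all via `sed` before `systemctl restart containerd`.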
	I0229 03:06:21.712534   12724 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 03:06:21.740771   12724 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 03:06:21.768768   12724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 03:06:21.973056   12724 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 03:06:22.005741   12724 start.go:475] detecting cgroup driver to use...
	I0229 03:06:22.016154   12724 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 03:06:22.051903   12724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 03:06:22.082434   12724 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 03:06:22.129962   12724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 03:06:22.177968   12724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 03:06:22.213702   12724 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 03:06:22.270286   12724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 03:06:22.295109   12724 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 03:06:22.338747   12724 ssh_runner.go:195] Run: which cri-dockerd
	I0229 03:06:22.353970   12724 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 03:06:22.372002   12724 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 03:06:22.414472   12724 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 03:06:22.606334   12724 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 03:06:22.795433   12724 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 03:06:22.795599   12724 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 03:06:22.840155   12724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 03:06:23.039712   12724 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 03:06:24.610484   12724 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5706838s)
	I0229 03:06:24.617862   12724 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 03:06:24.659365   12724 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 03:06:24.694142   12724 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0229 03:06:24.694253   12724 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 03:06:24.698042   12724 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 03:06:24.698042   12724 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 03:06:24.698158   12724 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 03:06:24.698158   12724 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:a3:c1 Flags:up|broadcast|multicast|running}
	I0229 03:06:24.701187   12724 ip.go:210] interface addr: fe80::fc78:4865:5cac:d448/64
	I0229 03:06:24.701246   12724 ip.go:210] interface addr: 172.19.0.1/20
	I0229 03:06:24.709356   12724 ssh_runner.go:195] Run: grep 172.19.0.1	host.minikube.internal$ /etc/hosts
	I0229 03:06:24.716270   12724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
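The `/etc/hosts` update above is deliberately idempotent: filter out any existing `host.minikube.internal` entry, append the fresh one, and copy the temp file back. The same pattern against a scratch file (path and the stale `10.0.0.1` entry are hypothetical):

```shell
HOSTS=/tmp/hosts.demo
# Seed with a localhost line and a stale minikube entry.
printf '127.0.0.1\tlocalhost\n10.0.0.1\thost.minikube.internal\n' > "$HOSTS"
# Drop the old entry, append the current gateway IP, replace the file.
{ grep -v 'host\.minikube\.internal$' "$HOSTS"; \
  printf '172.19.0.1\thost.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```

Because the old entry is stripped first, re-running after an IP change (the Default Switch subnet moves between host reboots) never accumulates duplicate lines.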
	I0229 03:06:24.738285   12724 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 03:06:24.747272   12724 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 03:06:24.771242   12724 docker.go:685] Got preloaded images: 
	I0229 03:06:24.771347   12724 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 03:06:24.784559   12724 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 03:06:24.815328   12724 ssh_runner.go:195] Run: which lz4
	I0229 03:06:24.839917   12724 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 03:06:24.846990   12724 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 03:06:24.847214   12724 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0229 03:06:27.109871   12724 docker.go:649] Took 2.279565 seconds to copy over tarball
	I0229 03:06:27.117814   12724 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 03:06:37.064941   12724 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (9.9455689s)
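The preload restore above is a plain `tar` extract into `/var` with an lz4 decompression filter (`-I lz4`) and extended-attribute preservation. The same shape without the lz4 filter, using scratch directories and hypothetical content in place of the 369 MB preload tarball:

```shell
# Build a tiny stand-in for the preload tarball.
mkdir -p /tmp/preload-src/lib/docker /tmp/preload-dst
echo demo > /tmp/preload-src/lib/docker/layer
tar -C /tmp/preload-src -cf /tmp/preloaded.tar .
# Extract into the target root, as minikube extracts into /var.
tar -C /tmp/preload-dst -xf /tmp/preloaded.tar
cat /tmp/preload-dst/lib/docker/layer
```

In the real run the `--xattrs --xattrs-include security.capability` flags matter because Docker's layer store relies on file capabilities surviving the extract; the tarball is then deleted to reclaim disk, as the next log line shows.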
	I0229 03:06:37.065203   12724 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 03:06:37.130333   12724 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 03:06:37.149953   12724 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0229 03:06:37.191831   12724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 03:06:37.396142   12724 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 03:06:39.545276   12724 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.1490138s)
	I0229 03:06:39.552609   12724 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 03:06:39.578242   12724 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 03:06:39.578242   12724 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 03:06:39.578242   12724 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 03:06:39.599562   12724 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 03:06:39.607136   12724 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 03:06:39.608394   12724 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 03:06:39.612328   12724 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 03:06:39.614270   12724 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 03:06:39.615277   12724 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 03:06:39.615277   12724 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 03:06:39.616279   12724 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 03:06:39.620256   12724 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 03:06:39.623268   12724 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 03:06:39.625269   12724 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 03:06:39.627268   12724 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 03:06:39.627268   12724 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 03:06:39.628252   12724 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 03:06:39.628252   12724 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 03:06:39.630252   12724 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	W0229 03:06:39.712029   12724 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 03:06:39.789375   12724 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 03:06:39.868708   12724 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 03:06:39.945587   12724 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 03:06:40.021063   12724 image.go:187] authn lookup for registry.k8s.io/etcd:3.3.15-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 03:06:40.116158   12724 image.go:187] authn lookup for registry.k8s.io/coredns:1.6.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 03:06:40.147316   12724 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	W0229 03:06:40.208540   12724 image.go:187] authn lookup for registry.k8s.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 03:06:40.221283   12724 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 03:06:40.250282   12724 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 03:06:40.250282   12724 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.16.0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0
	I0229 03:06:40.250282   12724 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 03:06:40.257283   12724 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 03:06:40.265294   12724 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 03:06:40.282279   12724 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	W0229 03:06:40.288302   12724 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 03:06:40.290283   12724 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 03:06:40.293304   12724 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0
	I0229 03:06:40.296181   12724 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 03:06:40.296181   12724 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.16.0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0
	I0229 03:06:40.296181   12724 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 03:06:40.311102   12724 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 03:06:40.334552   12724 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 03:06:40.334552   12724 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.3.15-0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0
	I0229 03:06:40.334552   12724 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 03:06:40.343290   12724 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0229 03:06:40.348566   12724 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 03:06:40.348566   12724 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.16.0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.16.0
	I0229 03:06:40.348566   12724 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 03:06:40.358472   12724 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 03:06:40.359150   12724 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0
	I0229 03:06:40.380264   12724 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0
	I0229 03:06:40.385608   12724 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.16.0
	I0229 03:06:40.397904   12724 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 03:06:40.428269   12724 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 03:06:40.428269   12724 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0229 03:06:40.428269   12724 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0229 03:06:40.436839   12724 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0229 03:06:40.439310   12724 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 03:06:40.465895   12724 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0229 03:06:40.465895   12724 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 03:06:40.465895   12724 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.2 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2
	I0229 03:06:40.465895   12724 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 03:06:40.474318   12724 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0229 03:06:40.501782   12724 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2
	I0229 03:06:40.584830   12724 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 03:06:40.609904   12724 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 03:06:40.609986   12724 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.16.0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.16.0
	I0229 03:06:40.609986   12724 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 03:06:40.616824   12724 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 03:06:40.643262   12724 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.16.0
	I0229 03:06:40.643884   12724 cache_images.go:92] LoadImages completed in 1.0655822s
	W0229 03:06:40.644100   12724 out.go:239] X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0: The system cannot find the file specified.
	X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0: The system cannot find the file specified.
	I0229 03:06:40.651245   12724 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 03:06:40.686384   12724 cni.go:84] Creating CNI manager for ""
	I0229 03:06:40.686921   12724 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 03:06:40.686967   12724 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 03:06:40.687000   12724 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.7.22 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-398700 NodeName:kubernetes-upgrade-398700 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.7.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.7.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 03:06:40.687000   12724 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.7.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-398700"
	  kubeletExtraArgs:
	    node-ip: 172.19.7.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.7.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-398700
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://172.19.7.22:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
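The generated file above is a four-document kubeadm YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a rough sanity check on such a file, the documents can be split on `---` separators and their `kind:` fields inspected with plain string handling; a minimal sketch, using an abbreviated stand-in for the full config rather than the real file:

```python
# Minimal sketch: verify a multi-document kubeadm config declares the
# expected kinds, using only stdlib string handling (no PyYAML).
# `sample` is an abbreviated stand-in for the full config above.
sample = """\
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

def kinds(yaml_text: str) -> list[str]:
    """Return the kind: value of each YAML document, in order."""
    found = []
    for doc in yaml_text.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                found.append(line.split(":", 1)[1].strip())
    return found

assert kinds(sample) == [
    "InitConfiguration",
    "ClusterConfiguration",
    "KubeletConfiguration",
    "KubeProxyConfiguration",
]
```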
	
	I0229 03:06:40.687000   12724 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-398700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.7.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-398700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
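The kubelet drop-in above carries the node's identity as `--key=value` flags on the ExecStart line (e.g. `--node-ip=172.19.7.22` from `kubeletExtraArgs`). An illustrative sketch of that rendering, not minikube's actual Go code:

```python
# Illustrative sketch (not minikube's actual implementation): render a
# map of kubelet extra args, like kubeletExtraArgs above, into the
# --key=value flags seen on the ExecStart line.
def render_kubelet_flags(extra_args: dict[str, str]) -> str:
    # Sort keys for a deterministic flag order.
    return " ".join(f"--{k}={v}" for k, v in sorted(extra_args.items()))

flags = render_kubelet_flags({
    "node-ip": "172.19.7.22",
    "hostname-override": "kubernetes-upgrade-398700",
})
assert flags == "--hostname-override=kubernetes-upgrade-398700 --node-ip=172.19.7.22"
```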
	I0229 03:06:40.696377   12724 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 03:06:40.714613   12724 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 03:06:40.724155   12724 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 03:06:40.741859   12724 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (350 bytes)
	I0229 03:06:40.772264   12724 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 03:06:40.802849   12724 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I0229 03:06:40.846304   12724 ssh_runner.go:195] Run: grep 172.19.7.22	control-plane.minikube.internal$ /etc/hosts
	I0229 03:06:40.851820   12724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.7.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
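The bash one-liner above performs an idempotent /etc/hosts update: filter out any stale line for `control-plane.minikube.internal`, then append the current mapping, so repeated runs leave exactly one entry. The same logic, sketched in Python for clarity:

```python
# Sketch of the idempotent hosts-file update above: drop any stale
# entry for the host name, then append the current IP mapping, so
# repeated runs leave exactly one line for that name.
def upsert_host(hosts: str, ip: str, name: str) -> str:
    kept = [l for l in hosts.splitlines() if not l.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n172.19.0.9\tcontrol-plane.minikube.internal\n"
after = upsert_host(before, "172.19.7.22", "control-plane.minikube.internal")
assert after.splitlines()[-1] == "172.19.7.22\tcontrol-plane.minikube.internal"
assert after.count("control-plane.minikube.internal") == 1
```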
	I0229 03:06:40.874715   12724 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700 for IP: 172.19.7.22
	I0229 03:06:40.874823   12724 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 03:06:40.876014   12724 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 03:06:40.876386   12724 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 03:06:40.876519   12724 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\client.key
	I0229 03:06:40.877173   12724 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\client.crt with IP's: []
	I0229 03:06:40.979145   12724 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\client.crt ...
	I0229 03:06:40.979145   12724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\client.crt: {Name:mkff83fb456a009751f249fb877dffd0dc99b098 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 03:06:40.980203   12724 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\client.key ...
	I0229 03:06:40.980203   12724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\client.key: {Name:mka6d288bddce63f07bf5a5f4bad33d868fc4d6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 03:06:40.981342   12724 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\apiserver.key.1251b87a
	I0229 03:06:40.981342   12724 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\apiserver.crt.1251b87a with IP's: [172.19.7.22 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 03:06:41.153046   12724 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\apiserver.crt.1251b87a ...
	I0229 03:06:41.153046   12724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\apiserver.crt.1251b87a: {Name:mk599189c0ace468d33d8b18b04c7abcfcdbeb51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 03:06:41.153046   12724 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\apiserver.key.1251b87a ...
	I0229 03:06:41.153046   12724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\apiserver.key.1251b87a: {Name:mkae6878d7682382185be74d860aa3565903e917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 03:06:41.153046   12724 certs.go:337] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\apiserver.crt.1251b87a -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\apiserver.crt
	I0229 03:06:41.166080   12724 certs.go:341] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\apiserver.key.1251b87a -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\apiserver.key
	I0229 03:06:41.167328   12724 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\proxy-client.key
	I0229 03:06:41.167646   12724 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\proxy-client.crt with IP's: []
	I0229 03:06:41.348354   12724 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\proxy-client.crt ...
	I0229 03:06:41.467388   12724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\proxy-client.crt: {Name:mk6d6006f8ee4ace60bc02a692a3124d88f615a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 03:06:41.468367   12724 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\proxy-client.key ...
	I0229 03:06:41.468367   12724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\proxy-client.key: {Name:mke7e01a3452046fd5f7c48842efb74b0c74b4c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 03:06:41.488171   12724 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem (1338 bytes)
	W0229 03:06:41.488258   12724 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312_empty.pem, impossibly tiny 0 bytes
	I0229 03:06:41.488258   12724 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 03:06:41.488258   12724 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 03:06:41.488954   12724 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 03:06:41.488954   12724 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0229 03:06:41.489598   12724 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem (1708 bytes)
	I0229 03:06:41.491316   12724 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 03:06:41.542799   12724 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 03:06:41.593359   12724 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 03:06:41.640527   12724 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 03:06:41.687647   12724 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 03:06:41.734205   12724 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 03:06:41.780457   12724 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 03:06:41.829288   12724 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 03:06:41.876618   12724 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /usr/share/ca-certificates/33122.pem (1708 bytes)
	I0229 03:06:41.927942   12724 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 03:06:41.972768   12724 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3312.pem --> /usr/share/ca-certificates/3312.pem (1338 bytes)
	I0229 03:06:42.020212   12724 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 03:06:42.060753   12724 ssh_runner.go:195] Run: openssl version
	I0229 03:06:42.081839   12724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 03:06:42.112872   12724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 03:06:42.120663   12724 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I0229 03:06:42.128936   12724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 03:06:42.148489   12724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 03:06:42.178303   12724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3312.pem && ln -fs /usr/share/ca-certificates/3312.pem /etc/ssl/certs/3312.pem"
	I0229 03:06:42.209925   12724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3312.pem
	I0229 03:06:42.217060   12724 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:59 /usr/share/ca-certificates/3312.pem
	I0229 03:06:42.225519   12724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3312.pem
	I0229 03:06:42.248700   12724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3312.pem /etc/ssl/certs/51391683.0"
	I0229 03:06:42.276970   12724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/33122.pem && ln -fs /usr/share/ca-certificates/33122.pem /etc/ssl/certs/33122.pem"
	I0229 03:06:42.306968   12724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/33122.pem
	I0229 03:06:42.315580   12724 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:59 /usr/share/ca-certificates/33122.pem
	I0229 03:06:42.324487   12724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/33122.pem
	I0229 03:06:42.343844   12724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/33122.pem /etc/ssl/certs/3ec20f2e.0"
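The commands above follow a guard-then-link pattern: `test -s cert && ln -fs ...` installs a symlink only for a non-empty cert, and `test -L hash.0 || ln -fs ...` creates the OpenSSL hash-named alias only if it is not already a symlink. A sketch of the same pattern with placeholder files in a temp dir; the real hash name (e.g. `b5213941.0`) comes from `openssl x509 -hash -noout`, and `deadbeef.0` here is a stand-in, not a computed subject hash:

```python
# Guard-then-link sketch with placeholder files; "deadbeef.0" is a
# stand-in name, not a real OpenSSL subject hash.
import os
import tempfile

dir_ = tempfile.mkdtemp()
cert = os.path.join(dir_, "minikubeCA.pem")
with open(cert, "w") as f:
    f.write("dummy cert\n")

link = os.path.join(dir_, "minikubeCA.link")
# Link only if the source cert exists and is non-empty (test -s).
if os.path.exists(cert) and os.path.getsize(cert) > 0:
    if os.path.lexists(link):
        os.remove(link)  # mirrors ln -f: replace any existing link
    os.symlink(cert, link)

hashed = os.path.join(dir_, "deadbeef.0")
# Create the hash-named alias only if not already a symlink (test -L).
if not os.path.islink(hashed):
    os.symlink(link, hashed)

assert os.path.islink(link) and os.path.islink(hashed)
```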
	I0229 03:06:42.373341   12724 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 03:06:42.380157   12724 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 03:06:42.380503   12724 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-398700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-398700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.7.22 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 03:06:42.387201   12724 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 03:06:42.427869   12724 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 03:06:42.457963   12724 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 03:06:42.485627   12724 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 03:06:42.502596   12724 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
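The config check above lists four expected kubeconfig files; when any is missing (exit status 2 from `ls`), stale-config cleanup is skipped and the flow falls through to a fresh `kubeadm init`. A sketch of that decision, with a temp dir standing in for the real /etc/kubernetes:

```python
# Sketch of the stale-config check: if any expected kubeconfig file is
# missing, skip cleanup and fall through to a fresh `kubeadm init`.
# A fresh temp dir stands in for /etc/kubernetes, so all are missing.
import os
import tempfile

root = tempfile.mkdtemp()
expected = ["admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"]
missing = [f for f in expected if not os.path.exists(os.path.join(root, f))]
if missing:
    print("config check failed, skipping stale config cleanup")

assert len(missing) == 4
```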
	I0229 03:06:42.503596   12724 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 03:06:42.823496   12724 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 03:06:42.868407   12724 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 03:06:43.222700   12724 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 03:08:42.696677   12724 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 03:08:42.696677   12724 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 03:08:42.698253   12724 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 03:08:42.698253   12724 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 03:08:42.698253   12724 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 03:08:42.699010   12724 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 03:08:42.699218   12724 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 03:08:42.699454   12724 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 03:08:42.699454   12724 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 03:08:42.700210   12724 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 03:08:42.700210   12724 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 03:08:42.701660   12724 out.go:204]   - Generating certificates and keys ...
	I0229 03:08:42.701890   12724 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 03:08:42.702354   12724 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 03:08:42.702354   12724 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 03:08:42.702354   12724 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 03:08:42.702354   12724 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 03:08:42.702354   12724 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 03:08:42.702933   12724 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 03:08:42.703027   12724 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-398700 localhost] and IPs [172.19.7.22 127.0.0.1 ::1]
	I0229 03:08:42.703027   12724 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 03:08:42.704392   12724 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-398700 localhost] and IPs [172.19.7.22 127.0.0.1 ::1]
	I0229 03:08:42.704392   12724 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 03:08:42.704392   12724 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 03:08:42.704392   12724 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 03:08:42.705052   12724 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 03:08:42.705166   12724 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 03:08:42.705224   12724 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 03:08:42.705419   12724 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 03:08:42.705743   12724 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 03:08:42.706006   12724 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 03:08:42.706954   12724 out.go:204]   - Booting up control plane ...
	I0229 03:08:42.707197   12724 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 03:08:42.707225   12724 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 03:08:42.707225   12724 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 03:08:42.707225   12724 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 03:08:42.707961   12724 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 03:08:42.708034   12724 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 03:08:42.708034   12724 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 03:08:42.708577   12724 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 03:08:42.708802   12724 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 03:08:42.709225   12724 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 03:08:42.709587   12724 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 03:08:42.710014   12724 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 03:08:42.710171   12724 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 03:08:42.710575   12724 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 03:08:42.710693   12724 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 03:08:42.711092   12724 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 03:08:42.711092   12724 kubeadm.go:322] 
	I0229 03:08:42.711092   12724 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 03:08:42.711092   12724 kubeadm.go:322] 	timed out waiting for the condition
	I0229 03:08:42.711092   12724 kubeadm.go:322] 
	I0229 03:08:42.711092   12724 kubeadm.go:322] This error is likely caused by:
	I0229 03:08:42.711092   12724 kubeadm.go:322] 	- The kubelet is not running
	I0229 03:08:42.711694   12724 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 03:08:42.711694   12724 kubeadm.go:322] 
	I0229 03:08:42.711694   12724 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 03:08:42.711694   12724 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 03:08:42.711694   12724 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 03:08:42.711694   12724 kubeadm.go:322] 
	I0229 03:08:42.712295   12724 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 03:08:42.712295   12724 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 03:08:42.712295   12724 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 03:08:42.712295   12724 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 03:08:42.713072   12724 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 03:08:42.713072   12724 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0229 03:08:42.713072   12724 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-398700 localhost] and IPs [172.19.7.22 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-398700 localhost] and IPs [172.19.7.22 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
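The repeated `[kubelet-check]` lines above come from kubeadm polling the kubelet's healthz endpoint (`http://localhost:10248/healthz`) until it answers or the wait-control-plane timeout expires; "connection refused" the whole way means the kubelet never came up. A minimal stdlib sketch of such a poll loop, exercised against a throwaway local HTTP server rather than a real kubelet:

```python
# Minimal sketch of a kubelet-style health poll: repeatedly GET a
# /healthz endpoint until it answers 200 or a deadline passes. Runs
# against a throwaway local stub server, not a real kubelet.
import http.server
import threading
import time
import urllib.error
import urllib.request

class Healthz(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200 if self.path == "/healthz" else 404)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the demo quiet
        pass

def poll_healthz(url: str, deadline_s: float, interval_s: float = 0.1) -> bool:
    """Return True once the endpoint answers 200, False on timeout."""
    stop = time.monotonic() + deadline_s
    while time.monotonic() < stop:
        try:
            with urllib.request.urlopen(url, timeout=1) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # connection refused: server (here: stub) not up yet
        time.sleep(interval_s)
    return False

server = http.server.HTTPServer(("127.0.0.1", 0), Healthz)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

ok = poll_healthz(f"http://127.0.0.1:{port}/healthz", deadline_s=5)
assert ok
server.shutdown()
```

Against an endpoint that never starts, the same loop returns False after the deadline, which is exactly the repeated connection-refused pattern in the log above.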
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-398700 localhost] and IPs [172.19.7.22 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-398700 localhost] and IPs [172.19.7.22 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 03:08:42.713072   12724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 03:08:43.212043   12724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 03:08:43.247838   12724 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 03:08:43.268220   12724 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 03:08:43.268309   12724 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 03:08:43.472061   12724 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 03:08:43.516604   12724 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 03:08:43.642084   12724 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 03:10:40.228063   12724 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 03:10:40.228205   12724 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 03:10:40.229594   12724 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 03:10:40.229594   12724 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 03:10:40.229594   12724 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 03:10:40.229594   12724 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 03:10:40.230593   12724 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 03:10:40.230593   12724 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 03:10:40.230593   12724 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 03:10:40.230593   12724 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 03:10:40.230593   12724 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 03:10:40.247632   12724 out.go:204]   - Generating certificates and keys ...
	I0229 03:10:40.248554   12724 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 03:10:40.248824   12724 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 03:10:40.249099   12724 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 03:10:40.249099   12724 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 03:10:40.249099   12724 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 03:10:40.249601   12724 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 03:10:40.249601   12724 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 03:10:40.249601   12724 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 03:10:40.249601   12724 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 03:10:40.250640   12724 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 03:10:40.250640   12724 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 03:10:40.250640   12724 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 03:10:40.250640   12724 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 03:10:40.250640   12724 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 03:10:40.251597   12724 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 03:10:40.251597   12724 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 03:10:40.251597   12724 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 03:10:40.295290   12724 out.go:204]   - Booting up control plane ...
	I0229 03:10:40.296111   12724 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 03:10:40.296329   12724 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 03:10:40.296643   12724 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 03:10:40.296997   12724 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 03:10:40.297723   12724 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 03:10:40.298021   12724 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 03:10:40.298301   12724 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 03:10:40.298974   12724 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 03:10:40.299473   12724 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 03:10:40.299874   12724 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 03:10:40.300479   12724 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 03:10:40.300902   12724 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 03:10:40.301070   12724 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 03:10:40.301389   12724 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 03:10:40.301389   12724 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 03:10:40.301982   12724 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 03:10:40.301982   12724 kubeadm.go:322] 
	I0229 03:10:40.301982   12724 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 03:10:40.301982   12724 kubeadm.go:322] 	timed out waiting for the condition
	I0229 03:10:40.301982   12724 kubeadm.go:322] 
	I0229 03:10:40.301982   12724 kubeadm.go:322] This error is likely caused by:
	I0229 03:10:40.302671   12724 kubeadm.go:322] 	- The kubelet is not running
	I0229 03:10:40.302671   12724 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 03:10:40.302671   12724 kubeadm.go:322] 
	I0229 03:10:40.302671   12724 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 03:10:40.303367   12724 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 03:10:40.303436   12724 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 03:10:40.303436   12724 kubeadm.go:322] 
	I0229 03:10:40.303436   12724 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 03:10:40.303436   12724 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 03:10:40.304311   12724 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 03:10:40.304311   12724 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 03:10:40.304311   12724 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 03:10:40.304311   12724 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 03:10:40.304311   12724 kubeadm.go:406] StartCluster complete in 3m57.9105603s
	I0229 03:10:40.311298   12724 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 03:10:40.337499   12724 logs.go:276] 0 containers: []
	W0229 03:10:40.337572   12724 logs.go:278] No container was found matching "kube-apiserver"
	I0229 03:10:40.345084   12724 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 03:10:40.372690   12724 logs.go:276] 0 containers: []
	W0229 03:10:40.372690   12724 logs.go:278] No container was found matching "etcd"
	I0229 03:10:40.379828   12724 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 03:10:40.404398   12724 logs.go:276] 0 containers: []
	W0229 03:10:40.404398   12724 logs.go:278] No container was found matching "coredns"
	I0229 03:10:40.413370   12724 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 03:10:40.440508   12724 logs.go:276] 0 containers: []
	W0229 03:10:40.440573   12724 logs.go:278] No container was found matching "kube-scheduler"
	I0229 03:10:40.447599   12724 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 03:10:40.478740   12724 logs.go:276] 0 containers: []
	W0229 03:10:40.478812   12724 logs.go:278] No container was found matching "kube-proxy"
	I0229 03:10:40.486596   12724 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 03:10:40.520071   12724 logs.go:276] 0 containers: []
	W0229 03:10:40.520071   12724 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 03:10:40.528627   12724 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 03:10:40.572497   12724 logs.go:276] 0 containers: []
	W0229 03:10:40.572617   12724 logs.go:278] No container was found matching "kindnet"
	I0229 03:10:40.572617   12724 logs.go:123] Gathering logs for container status ...
	I0229 03:10:40.572708   12724 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 03:10:40.680700   12724 logs.go:123] Gathering logs for kubelet ...
	I0229 03:10:40.680807   12724 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 03:10:40.756317   12724 logs.go:123] Gathering logs for dmesg ...
	I0229 03:10:40.756317   12724 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 03:10:40.784139   12724 logs.go:123] Gathering logs for describe nodes ...
	I0229 03:10:40.784227   12724 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 03:10:40.894382   12724 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 03:10:40.894382   12724 logs.go:123] Gathering logs for Docker ...
	I0229 03:10:40.894382   12724 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0229 03:10:40.957078   12724 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 03:10:40.957078   12724 out.go:239] * 
	* 
	W0229 03:10:40.957078   12724 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 03:10:40.958135   12724 out.go:239] * 
	W0229 03:10:40.959389   12724 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 03:10:40.992640   12724 out.go:177] 
	W0229 03:10:40.994230   12724 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 03:10:40.994230   12724 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 03:10:40.994230   12724 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 03:10:40.995292   12724 out.go:177] 

                                                
                                                
** /stderr **
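Editor's note: the kubeadm preflight warnings above flag a Docker cgroup-driver mismatch ("cgroupfs" detected where "systemd" is recommended), a common cause of the kubelet failing its `localhost:10248/healthz` check. A minimal, hypothetical sketch of the conventional fix, assuming Docker inside the guest reads `/etc/docker/daemon.json` (written here to a scratch path purely so the snippet is safe to run anywhere):

```shell
# Hypothetical illustration: switch Docker's cgroup driver to systemd.
# On a real node this file would be /etc/docker/daemon.json, followed by
# 'systemctl restart docker' and a retried 'kubeadm init'.
DAEMON_JSON=/tmp/daemon.json
cat > "$DAEMON_JSON" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
grep -q 'native.cgroupdriver=systemd' "$DAEMON_JSON" && echo "cgroup driver configured"
```

Alternatively, the log's own suggestion, passing `--extra-config=kubelet.cgroup-driver=systemd` to `minikube start`, aligns the kubelet with Docker rather than the other way around.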
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-398700 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-398700
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-398700: (23.5301135s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-398700 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-398700 status --format={{.Host}}: exit status 7 (2.3857575s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 03:11:05.085976    9408 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-398700 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-398700 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: exit status 80 (2m2.6839217s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-398700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node kubernetes-upgrade-398700 in cluster kubernetes-upgrade-398700
	* Restarting existing hyperv VM for "kubernetes-upgrade-398700" ...
	* Restarting existing hyperv VM for "kubernetes-upgrade-398700" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 03:11:07.466211    8376 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 03:11:07.517212    8376 out.go:291] Setting OutFile to fd 1160 ...
	I0229 03:11:07.518206    8376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 03:11:07.518206    8376 out.go:304] Setting ErrFile to fd 1908...
	I0229 03:11:07.518206    8376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 03:11:07.538601    8376 out.go:298] Setting JSON to false
	I0229 03:11:07.543122    8376 start.go:129] hostinfo: {"hostname":"minikube5","uptime":272494,"bootTime":1708903773,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 03:11:07.543122    8376 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 03:11:07.544453    8376 out.go:177] * [kubernetes-upgrade-398700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 03:11:07.545599    8376 notify.go:220] Checking for updates...
	I0229 03:11:07.546226    8376 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 03:11:07.546980    8376 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 03:11:07.547614    8376 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 03:11:07.548373    8376 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 03:11:07.548373    8376 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 03:11:07.550355    8376 config.go:182] Loaded profile config "kubernetes-upgrade-398700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0229 03:11:07.551357    8376 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 03:11:13.819319    8376 out.go:177] * Using the hyperv driver based on existing profile
	I0229 03:11:13.820164    8376 start.go:299] selected driver: hyperv
	I0229 03:11:13.820164    8376 start.go:903] validating driver "hyperv" against &{Name:kubernetes-upgrade-398700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-398700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.7.22 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 03:11:13.820164    8376 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 03:11:13.892516    8376 cni.go:84] Creating CNI manager for ""
	I0229 03:11:13.892516    8376 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 03:11:13.892516    8376 start_flags.go:323] config:
	{Name:kubernetes-upgrade-398700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-398700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.7.22 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 03:11:13.893473    8376 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 03:11:13.894583    8376 out.go:177] * Starting control plane node kubernetes-upgrade-398700 in cluster kubernetes-upgrade-398700
	I0229 03:11:13.895586    8376 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 03:11:13.895586    8376 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0229 03:11:13.895586    8376 cache.go:56] Caching tarball of preloaded images
	I0229 03:11:13.895586    8376 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 03:11:13.896581    8376 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0229 03:11:13.896581    8376 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-398700\config.json ...
	I0229 03:11:13.899586    8376 start.go:365] acquiring machines lock for kubernetes-upgrade-398700: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 03:12:46.350790    8376 start.go:369] acquired machines lock for "kubernetes-upgrade-398700" in 1m32.4460687s
	I0229 03:12:46.351336    8376 start.go:96] Skipping create...Using existing machine configuration
	I0229 03:12:46.351391    8376 fix.go:54] fixHost starting: 
	I0229 03:12:46.351711    8376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:12:48.395438    8376 main.go:141] libmachine: [stdout =====>] : Off
	
	I0229 03:12:48.395438    8376 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:12:48.395438    8376 fix.go:102] recreateIfNeeded on kubernetes-upgrade-398700: state=Stopped err=<nil>
	W0229 03:12:48.395438    8376 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 03:12:48.397260    8376 out.go:177] * Restarting existing hyperv VM for "kubernetes-upgrade-398700" ...
	I0229 03:12:48.397783    8376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM kubernetes-upgrade-398700
	I0229 03:12:59.975624    8376 main.go:141] libmachine: [stdout =====>] : 
	E0229 03:12:59.975624    8376 main.go:137] libmachine: [stderr =====>] : Hyper-V\Start-VM : 'kubernetes-upgrade-398700' failed to start.
	Could not initialize memory: There is not enough space on the disk. (0x80070070).
	The Virtual Machine 'kubernetes-upgrade-398700' failed to start because there is not enough disk space.
	'kubernetes-upgrade-398700' failed to start. (Virtual machine ID 290A6ADB-2C2F-4851-AB61-355CAD5D4EA6)
	'kubernetes-upgrade-398700' could not initialize memory: There is not enough space on the disk. (0x80070070). (Virtual machine ID 290A6ADB-2C2F-4851-AB61-355CAD5D4EA6)
	The Virtual Machine 'kubernetes-upgrade-398700' failed to start because there is not enough disk space. The system was unable to create the memory contents file on 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-398700\kubernetes-upgrade-398700\Virtual Machines\290A6ADB-2C2F-4851-AB61-355CAD5D4EA6.VMRS' with the size of 2200 MB. Set the path to a disk with more storage space or delete unnecessary files from the disk and try again. (Virtual machine ID 290A6ADB-2C2F-4851-AB61-355CAD5D4EA6)
	At line:1 char:1
	+ Hyper-V\Start-VM kubernetes-upgrade-398700
	+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	    + CategoryInfo          : NotSpecified: (:) [Start-VM], VirtualizationException
	    + FullyQualifiedErrorId : OperationFailed,Microsoft.HyperV.PowerShell.Commands.StartVM
	 
	
	I0229 03:12:59.975624    8376 fix.go:56] fixHost completed within 13.6234767s
	I0229 03:12:59.975624    8376 start.go:83] releasing machines lock for "kubernetes-upgrade-398700", held for 13.6240778s
	W0229 03:12:59.975624    8376 start.go:694] error starting host: driver start: exit status 1
	W0229 03:12:59.976626    8376 out.go:239] ! StartHost failed, but will try again: driver start: exit status 1
	I0229 03:12:59.976626    8376 start.go:709] Will try again in 5 seconds ...
	I0229 03:13:04.985296    8376 start.go:365] acquiring machines lock for kubernetes-upgrade-398700: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 03:13:04.985773    8376 start.go:369] acquired machines lock for "kubernetes-upgrade-398700" in 256.1µs
	I0229 03:13:04.985987    8376 start.go:96] Skipping create...Using existing machine configuration
	I0229 03:13:04.985987    8376 fix.go:54] fixHost starting: 
	I0229 03:13:04.986174    8376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-398700 ).state
	I0229 03:13:07.081498    8376 main.go:141] libmachine: [stdout =====>] : Off
	
	I0229 03:13:07.081764    8376 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:13:07.081861    8376 fix.go:102] recreateIfNeeded on kubernetes-upgrade-398700: state=Stopped err=<nil>
	W0229 03:13:07.081861    8376 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 03:13:07.103986    8376 out.go:177] * Restarting existing hyperv VM for "kubernetes-upgrade-398700" ...
	I0229 03:13:07.105134    8376 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM kubernetes-upgrade-398700
	I0229 03:13:09.958313    8376 main.go:141] libmachine: [stdout =====>] : 
	E0229 03:13:09.958313    8376 main.go:137] libmachine: [stderr =====>] : Hyper-V\Start-VM : 'kubernetes-upgrade-398700' failed to start.
	Could not initialize memory: There is not enough space on the disk. (0x80070070).
	The Virtual Machine 'kubernetes-upgrade-398700' failed to start because there is not enough disk space.
	'kubernetes-upgrade-398700' failed to start. (Virtual machine ID 290A6ADB-2C2F-4851-AB61-355CAD5D4EA6)
	'kubernetes-upgrade-398700' could not initialize memory: There is not enough space on the disk. (0x80070070). (Virtual 
	machine ID 290A6ADB-2C2F-4851-AB61-355CAD5D4EA6)
	The Virtual Machine 'kubernetes-upgrade-398700' failed to start because there is not enough disk space. The system was 
	unable to create the memory contents file on 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubern
	etes-upgrade-398700\kubernetes-upgrade-398700\Virtual Machines\290A6ADB-2C2F-4851-AB61-355CAD5D4EA6.VMRS' with the 
	size of 2200 MB. Set the path to a disk with more storage space or delete unnecessary files from the disk and try 
	again. (Virtual machine ID 290A6ADB-2C2F-4851-AB61-355CAD5D4EA6)
	At line:1 char:1
	+ Hyper-V\Start-VM kubernetes-upgrade-398700
	+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	    + CategoryInfo          : NotSpecified: (:) [Start-VM], VirtualizationException
	    + FullyQualifiedErrorId : OperationFailed,Microsoft.HyperV.PowerShell.Commands.StartVM
	 
	
	I0229 03:13:09.958313    8376 fix.go:56] fixHost completed within 4.9720509s
	I0229 03:13:09.958313    8376 start.go:83] releasing machines lock for "kubernetes-upgrade-398700", held for 4.972264s
	W0229 03:13:09.958313    8376 out.go:239] * Failed to start hyperv VM. Running "minikube delete -p kubernetes-upgrade-398700" may fix it: driver start: exit status 1
	* Failed to start hyperv VM. Running "minikube delete -p kubernetes-upgrade-398700" may fix it: driver start: exit status 1
	I0229 03:13:09.959767    8376 out.go:177] 
	W0229 03:13:09.960376    8376 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: exit status 1
	W0229 03:13:09.960376    8376 out.go:239] * 
	* 
	W0229 03:13:09.961967    8376 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 03:13:09.962716    8376 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-398700 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-398700 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-398700 version --output=json: exit status 1 (143.182ms)

** stderr ** 
	error: context "kubernetes-upgrade-398700" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-02-29 03:13:10.2042954 +0000 UTC m=+9049.180651501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-398700 -n kubernetes-upgrade-398700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-398700 -n kubernetes-upgrade-398700: exit status 7 (2.4638803s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0229 03:13:10.308981    8460 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-398700" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-398700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-398700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-398700: (26.1644004s)
--- FAIL: TestKubernetesUpgrade (787.67s)

TestNoKubernetes/serial/StartWithK8s (307.04s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-419900 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-419900 --driver=hyperv: exit status 1 (4m59.7723731s)

-- stdout --
	* [NoKubernetes-419900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node NoKubernetes-419900 in cluster NoKubernetes-419900
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

-- /stdout --
** stderr ** 
	W0229 02:54:53.407510   13412 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-419900 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-419900 -n NoKubernetes-419900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-419900 -n NoKubernetes-419900: exit status 7 (7.2673663s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	W0229 02:59:53.199513    7380 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 03:00:00.308988    7380 status.go:352] failed to get driver ip: getting IP: IP not found
	E0229 03:00:00.308988    7380 status.go:249] status error: getting IP: IP not found

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-419900" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (307.04s)

TestPause/serial/SecondStartNoReconfiguration (567.99s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-783900 --alsologtostderr -v=1 --driver=hyperv
E0229 03:24:12.290527    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
pause_test.go:92: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p pause-783900 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (6m13.920577s)

-- stdout --
	* [pause-783900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node pause-783900 in cluster pause-783900
	* Updating the running hyperv "pause-783900" VM ...

-- /stdout --
** stderr ** 
	W0229 03:22:12.798489   13892 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 03:22:12.851234   13892 out.go:291] Setting OutFile to fd 1988 ...
	I0229 03:22:12.852231   13892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 03:22:12.852231   13892 out.go:304] Setting ErrFile to fd 1856...
	I0229 03:22:12.852231   13892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 03:22:12.871234   13892 out.go:298] Setting JSON to false
	I0229 03:22:12.874232   13892 start.go:129] hostinfo: {"hostname":"minikube5","uptime":273159,"bootTime":1708903773,"procs":203,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 03:22:12.874232   13892 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 03:22:12.875247   13892 out.go:177] * [pause-783900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 03:22:12.875247   13892 notify.go:220] Checking for updates...
	I0229 03:22:12.876241   13892 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 03:22:12.876241   13892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 03:22:12.877240   13892 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 03:22:12.878233   13892 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 03:22:12.879227   13892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 03:22:12.880229   13892 config.go:182] Loaded profile config "pause-783900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 03:22:12.881234   13892 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 03:22:18.162028   13892 out.go:177] * Using the hyperv driver based on existing profile
	I0229 03:22:18.162716   13892 start.go:299] selected driver: hyperv
	I0229 03:22:18.162716   13892 start.go:903] validating driver "hyperv" against &{Name:pause-783900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:pause-783900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.15.194 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-
installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 03:22:18.162817   13892 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 03:22:18.218124   13892 cni.go:84] Creating CNI manager for ""
	I0229 03:22:18.218124   13892 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 03:22:18.218124   13892 start_flags.go:323] config:
	{Name:pause-783900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-783900 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.19.15.194 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:fal
se portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 03:22:18.218124   13892 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 03:22:18.220124   13892 out.go:177] * Starting control plane node pause-783900 in cluster pause-783900
	I0229 03:22:18.220124   13892 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 03:22:18.221126   13892 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 03:22:18.221126   13892 cache.go:56] Caching tarball of preloaded images
	I0229 03:22:18.221126   13892 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 03:22:18.221126   13892 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 03:22:18.221126   13892 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\pause-783900\config.json ...
	I0229 03:22:18.223127   13892 start.go:365] acquiring machines lock for pause-783900: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 03:26:02.302204   13892 start.go:369] acquired machines lock for "pause-783900" in 3m44.0664754s
	I0229 03:26:02.302613   13892 start.go:96] Skipping create...Using existing machine configuration
	I0229 03:26:02.302613   13892 fix.go:54] fixHost starting: 
	I0229 03:26:02.303470   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-783900 ).state
	I0229 03:26:04.697215   13892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:26:04.697215   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:04.697215   13892 fix.go:102] recreateIfNeeded on pause-783900: state=Running err=<nil>
	W0229 03:26:04.697215   13892 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 03:26:04.698317   13892 out.go:177] * Updating the running hyperv "pause-783900" VM ...
	I0229 03:26:04.699093   13892 machine.go:88] provisioning docker machine ...
	I0229 03:26:04.699186   13892 buildroot.go:166] provisioning hostname "pause-783900"
	I0229 03:26:04.699355   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-783900 ).state
	I0229 03:26:07.137204   13892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:26:07.137204   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:07.137741   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-783900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:26:09.921107   13892 main.go:141] libmachine: [stdout =====>] : 172.19.15.194
	
	I0229 03:26:09.921107   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:09.924668   13892 main.go:141] libmachine: Using SSH client type: native
	I0229 03:26:09.925371   13892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.15.194 22 <nil> <nil>}
	I0229 03:26:09.925371   13892 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-783900 && echo "pause-783900" | sudo tee /etc/hostname
	I0229 03:26:10.109588   13892 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-783900
	
	I0229 03:26:10.109588   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-783900 ).state
	I0229 03:26:12.320483   13892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:26:12.320483   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:12.320483   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-783900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:26:15.011898   13892 main.go:141] libmachine: [stdout =====>] : 172.19.15.194
	
	I0229 03:26:15.011898   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:15.016245   13892 main.go:141] libmachine: Using SSH client type: native
	I0229 03:26:15.016245   13892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.15.194 22 <nil> <nil>}
	I0229 03:26:15.016245   13892 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-783900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-783900/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-783900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 03:26:15.163152   13892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 03:26:15.163234   13892 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 03:26:15.163365   13892 buildroot.go:174] setting up certificates
	I0229 03:26:15.163413   13892 provision.go:83] configureAuth start
	I0229 03:26:15.163413   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-783900 ).state
	I0229 03:26:17.317193   13892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:26:17.317193   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:17.317193   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-783900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:26:20.137865   13892 main.go:141] libmachine: [stdout =====>] : 172.19.15.194
	
	I0229 03:26:20.137960   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:20.137960   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-783900 ).state
	I0229 03:26:22.591427   13892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:26:22.591427   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:22.591509   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-783900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:26:25.591235   13892 main.go:141] libmachine: [stdout =====>] : 172.19.15.194
	
	I0229 03:26:25.591235   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:25.591235   13892 provision.go:138] copyHostCerts
	I0229 03:26:25.592243   13892 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 03:26:25.592243   13892 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 03:26:25.592243   13892 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 03:26:25.593241   13892 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 03:26:25.593241   13892 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 03:26:25.594239   13892 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 03:26:25.595238   13892 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 03:26:25.595238   13892 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 03:26:25.595238   13892 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0229 03:26:25.596241   13892 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-783900 san=[172.19.15.194 172.19.15.194 localhost 127.0.0.1 minikube pause-783900]
	I0229 03:26:25.938116   13892 provision.go:172] copyRemoteCerts
	I0229 03:26:25.951168   13892 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 03:26:25.951168   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-783900 ).state
	I0229 03:26:28.278534   13892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:26:28.278534   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:28.278534   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-783900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:26:30.802870   13892 main.go:141] libmachine: [stdout =====>] : 172.19.15.194
	
	I0229 03:26:30.802870   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:30.804401   13892 sshutil.go:53] new ssh client: &{IP:172.19.15.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-783900\id_rsa Username:docker}
	I0229 03:26:30.922581   13892 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9711362s)
	I0229 03:26:30.923191   13892 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 03:26:30.986247   13892 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I0229 03:26:31.042849   13892 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 03:26:31.095597   13892 provision.go:86] duration metric: configureAuth took 15.9312968s
	I0229 03:26:31.095715   13892 buildroot.go:189] setting minikube options for container-runtime
	I0229 03:26:31.096229   13892 config.go:182] Loaded profile config "pause-783900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 03:26:31.096376   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-783900 ).state
	I0229 03:26:33.155359   13892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:26:33.156101   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:33.156177   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-783900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:26:35.639668   13892 main.go:141] libmachine: [stdout =====>] : 172.19.15.194
	
	I0229 03:26:35.639668   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:35.644867   13892 main.go:141] libmachine: Using SSH client type: native
	I0229 03:26:35.644867   13892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.15.194 22 <nil> <nil>}
	I0229 03:26:35.644867   13892 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 03:26:35.786345   13892 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 03:26:35.786345   13892 buildroot.go:70] root file system type: tmpfs
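The provisioner determines the guest's root filesystem type with the one-liner shown above, run over SSH. A minimal local sketch of the same probe (assuming GNU coreutils `df`, which supports `--output`):

```shell
# Print only the filesystem type of /, as the log's SSH command does:
# df emits a header line plus one data line, so tail keeps the value.
fstype=$(df --output=fstype / | tail -n 1)
echo "root filesystem type: ${fstype}"
```

On the buildroot guest this prints `tmpfs`, which is what triggers the "root file system type: tmpfs" branch in the log.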
	I0229 03:26:35.786511   13892 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 03:26:35.786511   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-783900 ).state
	I0229 03:26:37.906045   13892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:26:37.906045   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:37.906121   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-783900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:26:40.521301   13892 main.go:141] libmachine: [stdout =====>] : 172.19.15.194
	
	I0229 03:26:40.522243   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:40.526840   13892 main.go:141] libmachine: Using SSH client type: native
	I0229 03:26:40.527401   13892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.15.194 22 <nil> <nil>}
	I0229 03:26:40.527458   13892 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 03:26:40.688472   13892 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
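The unit file echoed back above relies on the standard systemd pattern of an empty `ExecStart=` line to clear the start command inherited from the base configuration before defining a new one, exactly as its own comments explain. A minimal sketch of rendering such a file with the same `printf | tee` idiom, writing to a scratch directory rather than `/lib/systemd/system` and without `sudo`:

```shell
# Render a docker.service override the way the provisioner does,
# but into a temp dir so no privileges are needed.
unit_dir=$(mktemp -d)
printf %s "[Service]
# Clear the ExecStart inherited from the base unit; systemd rejects
# multiple ExecStart= lines for non-oneshot services otherwise.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
" | tee "${unit_dir}/docker.service.new" > /dev/null
# Both the clearing line and the real command start with "ExecStart=".
grep -c '^ExecStart=' "${unit_dir}/docker.service.new"   # → 2
```

Without the clearing line, systemd would refuse to start the service with the "more than one ExecStart= setting" error quoted in the unit's comments.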
	I0229 03:26:40.688472   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-783900 ).state
	I0229 03:26:42.899492   13892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:26:42.900100   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:42.900246   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-783900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:26:45.536490   13892 main.go:141] libmachine: [stdout =====>] : 172.19.15.194
	
	I0229 03:26:45.536490   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:45.540965   13892 main.go:141] libmachine: Using SSH client type: native
	I0229 03:26:45.541641   13892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.15.194 22 <nil> <nil>}
	I0229 03:26:45.541641   13892 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 03:26:45.688023   13892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 03:26:45.688023   13892 machine.go:91] provisioned docker machine in 40.9865566s
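The `diff -u old new || { mv …; systemctl … }` command just above is an idempotent install: the replacement, daemon reload, and restart only run when the freshly rendered unit actually differs from the one on disk. A hedged local sketch of the same pattern on ordinary files (no `systemctl`):

```shell
# Install "new" over "current" only if the contents differ,
# mirroring the log's diff || { mv ...; } construct.
dir=$(mktemp -d)
printf 'v1\n' > "${dir}/current"
printf 'v2\n' > "${dir}/new"
if diff -u "${dir}/current" "${dir}/new" > /dev/null; then
  echo "unchanged"           # identical: leave the live file alone
else
  mv "${dir}/new" "${dir}/current"
  echo "updated"             # differs: promote the new rendering
fi
```

Because `diff` exits non-zero only on a difference, an unchanged unit file costs nothing; that is why this provisioning step completes in well under a second here while a real change would trigger a full docker restart.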
	I0229 03:26:45.688023   13892 start.go:300] post-start starting for "pause-783900" (driver="hyperv")
	I0229 03:26:45.688023   13892 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 03:26:45.704695   13892 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 03:26:45.704695   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-783900 ).state
	I0229 03:26:47.881761   13892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:26:47.881837   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:47.881837   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-783900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:26:50.427968   13892 main.go:141] libmachine: [stdout =====>] : 172.19.15.194
	
	I0229 03:26:50.428834   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:50.429200   13892 sshutil.go:53] new ssh client: &{IP:172.19.15.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-783900\id_rsa Username:docker}
	I0229 03:26:50.528267   13892 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8233044s)
	I0229 03:26:50.537311   13892 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 03:26:50.545377   13892 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 03:26:50.545377   13892 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 03:26:50.545775   13892 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 03:26:50.546447   13892 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem -> 33122.pem in /etc/ssl/certs
	I0229 03:26:50.555565   13892 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 03:26:50.574566   13892 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\33122.pem --> /etc/ssl/certs/33122.pem (1708 bytes)
	I0229 03:26:50.625561   13892 start.go:303] post-start completed in 4.9372636s
	I0229 03:26:50.625561   13892 fix.go:56] fixHost completed within 48.3202596s
	I0229 03:26:50.625561   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-783900 ).state
	I0229 03:26:52.763739   13892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:26:52.763739   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:52.763739   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-783900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:26:55.313776   13892 main.go:141] libmachine: [stdout =====>] : 172.19.15.194
	
	I0229 03:26:55.313776   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:55.317787   13892 main.go:141] libmachine: Using SSH client type: native
	I0229 03:26:55.318480   13892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.15.194 22 <nil> <nil>}
	I0229 03:26:55.318480   13892 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 03:26:55.451920   13892 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709177215.620158760
	
	I0229 03:26:55.451920   13892 fix.go:206] guest clock: 1709177215.620158760
	I0229 03:26:55.451920   13892 fix.go:219] Guest: 2024-02-29 03:26:55.62015876 +0000 UTC Remote: 2024-02-29 03:26:50.6255616 +0000 UTC m=+277.895194701 (delta=4.99459716s)
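The delta logged above is simply the guest's `date +%s.%N` reading minus the host's recorded timestamp; when it is large enough, minikube resets the guest clock with `sudo date -s @<epoch>`, as seen a few lines below. A minimal sketch of the comparison using the two values from this log (the 2-second threshold is an illustrative choice, not minikube's actual cutoff; `awk` is used because shell arithmetic is integer-only):

```shell
# Guest and host readings taken from the log lines above.
guest=1709177215.620158760
host=1709177210.625561600
# Absolute difference, rounded to milliseconds.
delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "%.3f", d }')
echo "delta=${delta}s"
# Resync when drift exceeds the (hypothetical) 2s threshold.
awk -v d="$delta" 'BEGIN { exit !(d > 2) }' && echo "clock drift: resync needed"
```

The ~5 s drift here comes from the time the VM spent suspended between provisioning steps, which is why the very next SSH command is `sudo date -s @1709177215`.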
	I0229 03:26:55.451920   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-783900 ).state
	I0229 03:26:57.541928   13892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:26:57.542024   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:26:57.542218   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-783900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:27:00.028572   13892 main.go:141] libmachine: [stdout =====>] : 172.19.15.194
	
	I0229 03:27:00.028572   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:27:00.032699   13892 main.go:141] libmachine: Using SSH client type: native
	I0229 03:27:00.033274   13892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1079d80] 0x107c960 <nil>  [] 0s} 172.19.15.194 22 <nil> <nil>}
	I0229 03:27:00.033274   13892 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709177215
	I0229 03:27:00.176820   13892 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 03:26:55 UTC 2024
	
	I0229 03:27:00.176820   13892 fix.go:226] clock set: Thu Feb 29 03:26:55 UTC 2024
	 (err=<nil>)
	I0229 03:27:00.176898   13892 start.go:83] releasing machines lock for "pause-783900", held for 57.8713008s
	I0229 03:27:00.177118   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-783900 ).state
	I0229 03:27:02.452774   13892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:27:02.452774   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:27:02.452774   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-783900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:27:05.155192   13892 main.go:141] libmachine: [stdout =====>] : 172.19.15.194
	
	I0229 03:27:05.155192   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:27:05.158196   13892 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 03:27:05.159234   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-783900 ).state
	I0229 03:27:05.168253   13892 ssh_runner.go:195] Run: cat /version.json
	I0229 03:27:05.168253   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-783900 ).state
	I0229 03:27:07.488293   13892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:27:07.488293   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:27:07.488293   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-783900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:27:07.545073   13892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 03:27:07.545073   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:27:07.546076   13892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-783900 ).networkadapters[0]).ipaddresses[0]
	I0229 03:27:10.459645   13892 main.go:141] libmachine: [stdout =====>] : 172.19.15.194
	
	I0229 03:27:10.459645   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:27:10.460831   13892 sshutil.go:53] new ssh client: &{IP:172.19.15.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-783900\id_rsa Username:docker}
	I0229 03:27:10.555862   13892 main.go:141] libmachine: [stdout =====>] : 172.19.15.194
	
	I0229 03:27:10.555943   13892 main.go:141] libmachine: [stderr =====>] : 
	I0229 03:27:10.556418   13892 sshutil.go:53] new ssh client: &{IP:172.19.15.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-783900\id_rsa Username:docker}
	I0229 03:27:12.567451   13892 ssh_runner.go:235] Completed: cat /version.json: (7.3987867s)
	I0229 03:27:12.567451   13892 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.4088425s)
	W0229 03:27:12.567451   13892 start.go:843] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	W0229 03:27:12.567451   13892 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	! This VM is having trouble accessing https://registry.k8s.io
	W0229 03:27:12.567451   13892 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0229 03:27:12.582434   13892 ssh_runner.go:195] Run: systemctl --version
	I0229 03:27:12.606483   13892 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 03:27:12.617796   13892 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 03:27:12.634098   13892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 03:27:12.657752   13892 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0229 03:27:12.657752   13892 start.go:475] detecting cgroup driver to use...
	I0229 03:27:12.657752   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 03:27:12.726382   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 03:27:12.771436   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 03:27:12.806386   13892 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 03:27:12.815379   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 03:27:12.857388   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 03:27:12.902384   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 03:27:12.952392   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 03:27:12.990400   13892 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 03:27:13.028394   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
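The run of `sed -i -r` commands above rewrites individual keys of `/etc/containerd/config.toml` in place, for example forcing `SystemdCgroup = false` to match the "cgroupfs" driver the log says it is configuring. A minimal sketch of that one edit against a scratch copy of the file (GNU sed assumed, as on the guest):

```shell
# Apply the log's SystemdCgroup rewrite to a sample config.toml;
# the ( *) capture group preserves the key's leading indentation.
cfg=$(mktemp)
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '  SystemdCgroup = true' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"   # → "  SystemdCgroup = false"
```

Editing keys with anchored, indentation-preserving patterns like this leaves the rest of the TOML untouched, which is why the same technique is used for `sandbox_image`, `restrict_oom_score_adj`, and `conf_dir` in the surrounding commands.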
	I0229 03:27:13.065429   13892 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 03:27:13.105747   13892 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 03:27:13.141726   13892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 03:27:13.497435   13892 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 03:27:13.533453   13892 start.go:475] detecting cgroup driver to use...
	I0229 03:27:13.545433   13892 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 03:27:13.592443   13892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 03:27:13.630671   13892 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 03:27:13.689678   13892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 03:27:13.737673   13892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 03:27:13.782879   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 03:27:13.843873   13892 ssh_runner.go:195] Run: which cri-dockerd
	I0229 03:27:13.860919   13892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 03:27:13.886521   13892 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 03:27:13.933513   13892 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 03:27:14.298553   13892 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 03:27:14.644774   13892 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 03:27:14.644774   13892 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 03:27:14.704175   13892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 03:27:15.098216   13892 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 03:28:26.515049   13892 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4128621s)
	I0229 03:28:26.527562   13892 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0229 03:28:26.591567   13892 out.go:177] 
	W0229 03:28:26.591567   13892 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 29 03:20:49 pause-783900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 03:20:49 pause-783900 dockerd[645]: time="2024-02-29T03:20:49.740897581Z" level=info msg="Starting up"
	Feb 29 03:20:49 pause-783900 dockerd[645]: time="2024-02-29T03:20:49.741704741Z" level=info msg="containerd not running, starting managed containerd"
	Feb 29 03:20:49 pause-783900 dockerd[645]: time="2024-02-29T03:20:49.742716242Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=651
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.776222688Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.805344265Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.805437783Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.805562008Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.805581112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.805810757Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.805917078Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.806143623Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.806253045Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.806273249Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.806285751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.806381170Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.806799153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.809961680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.810139416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.810321352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.810439975Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.810846356Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.811005688Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.811117310Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.820411653Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.820574586Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.820601591Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.820694109Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.820714413Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821000870Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821311132Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821554180Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821674904Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821698508Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821713812Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821735016Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821748818Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821764422Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821779825Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821793427Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821806830Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821823033Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821847438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821864341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821878944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821895348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821910351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821924853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821939256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821953459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821968162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821986366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822007970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822022873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822036776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822052979Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822077884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822091086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822103289Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822169002Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822201608Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822298628Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822312030Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822414150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822441556Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822459359Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822827632Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822894446Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822932053Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822948456Z" level=info msg="containerd successfully booted in 0.048183s"
	Feb 29 03:20:49 pause-783900 dockerd[645]: time="2024-02-29T03:20:49.852835685Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 03:20:49 pause-783900 dockerd[645]: time="2024-02-29T03:20:49.869477686Z" level=info msg="Loading containers: start."
	Feb 29 03:20:50 pause-783900 dockerd[645]: time="2024-02-29T03:20:50.116465882Z" level=info msg="Loading containers: done."
	Feb 29 03:20:50 pause-783900 dockerd[645]: time="2024-02-29T03:20:50.130564472Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 03:20:50 pause-783900 dockerd[645]: time="2024-02-29T03:20:50.130764710Z" level=info msg="Daemon has completed initialization"
	Feb 29 03:20:50 pause-783900 dockerd[645]: time="2024-02-29T03:20:50.182082500Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 03:20:50 pause-783900 systemd[1]: Started Docker Application Container Engine.
	Feb 29 03:20:50 pause-783900 dockerd[645]: time="2024-02-29T03:20:50.182870750Z" level=info msg="API listen on [::]:2376"
	Feb 29 03:21:20 pause-783900 dockerd[645]: time="2024-02-29T03:21:20.029864076Z" level=info msg="Processing signal 'terminated'"
	Feb 29 03:21:20 pause-783900 dockerd[645]: time="2024-02-29T03:21:20.031309334Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 29 03:21:20 pause-783900 dockerd[645]: time="2024-02-29T03:21:20.031922059Z" level=info msg="Daemon shutdown complete"
	Feb 29 03:21:20 pause-783900 dockerd[645]: time="2024-02-29T03:21:20.031983962Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 03:21:20 pause-783900 dockerd[645]: time="2024-02-29T03:21:20.032037864Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 03:21:20 pause-783900 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 03:21:21 pause-783900 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 03:21:21 pause-783900 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 03:21:21 pause-783900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.109544927Z" level=info msg="Starting up"
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.110922883Z" level=info msg="containerd not running, starting managed containerd"
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.118063973Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=994
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.145735997Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.174457063Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.174541066Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.174594969Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.174618370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.174656571Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.174689373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.175009786Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.175056787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.175078588Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.175091689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.175123290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.175405802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.184871286Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.184982491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185164798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185229501Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185256702Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185280603Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185301904Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185574115Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185709320Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185730621Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185746722Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185764122Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185817824Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186525753Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186608657Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186625957Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186649958Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186672359Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186688560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186710361Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186732562Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186748962Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186762763Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186779064Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186791464Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186826065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186859567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186873567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186887668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186907969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186924469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186936870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186951571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186968471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187007573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187125478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187145678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187160979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187225882Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187264083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187281184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187293884Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187376988Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187516393Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187534294Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187546395Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187618998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187635698Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187646899Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187984012Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.188140819Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.188235923Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.188347327Z" level=info msg="containerd successfully booted in 0.043821s"
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.216469669Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.225529537Z" level=info msg="Loading containers: start."
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.407886444Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.479689860Z" level=info msg="Loading containers: done."
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.494255752Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.494412758Z" level=info msg="Daemon has completed initialization"
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.536377362Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.536504068Z" level=info msg="API listen on [::]:2376"
	Feb 29 03:21:21 pause-783900 systemd[1]: Started Docker Application Container Engine.
	Feb 29 03:21:36 pause-783900 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 03:21:36 pause-783900 dockerd[988]: time="2024-02-29T03:21:36.340051512Z" level=info msg="Processing signal 'terminated'"
	Feb 29 03:21:36 pause-783900 dockerd[988]: time="2024-02-29T03:21:36.342741121Z" level=info msg="Daemon shutdown complete"
	Feb 29 03:21:36 pause-783900 dockerd[988]: time="2024-02-29T03:21:36.343034633Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 03:21:36 pause-783900 dockerd[988]: time="2024-02-29T03:21:36.343519853Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 03:21:36 pause-783900 dockerd[988]: time="2024-02-29T03:21:36.343557454Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Feb 29 03:21:37 pause-783900 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 03:21:37 pause-783900 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 03:21:37 pause-783900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 03:21:37 pause-783900 dockerd[1290]: time="2024-02-29T03:21:37.427740888Z" level=info msg="Starting up"
	Feb 29 03:21:37 pause-783900 dockerd[1290]: time="2024-02-29T03:21:37.428743429Z" level=info msg="containerd not running, starting managed containerd"
	Feb 29 03:21:37 pause-783900 dockerd[1290]: time="2024-02-29T03:21:37.430226989Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1297
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.465043803Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.494675107Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.494808312Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.494855114Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.494870415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.494900716Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.494919017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.495102624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.495125125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.495139426Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.495151326Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.495175227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.495342734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.498723171Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.498854077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.499042984Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.499158489Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.499257693Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.499458201Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.499571406Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.499738813Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.499882018Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.499996223Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.500018624Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.500036425Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.500091827Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.500732753Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.500907460Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501023565Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501045466Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501064366Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501082167Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501098368Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501117769Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501135669Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501151770Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501167871Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501222073Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501269275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501289676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501307976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501324877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501340278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501587988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501705692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501732394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501749894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501768295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501785096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501800996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501816997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501836098Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501860699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501877699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501893200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.502067407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.502175612Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.502242314Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.502256215Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.502323118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.502445723Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.502487324Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.502732534Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.502923742Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.503077848Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.503572768Z" level=info msg="containerd successfully booted in 0.041610s"
	Feb 29 03:21:37 pause-783900 dockerd[1290]: time="2024-02-29T03:21:37.532456241Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 03:21:37 pause-783900 dockerd[1290]: time="2024-02-29T03:21:37.742776484Z" level=info msg="Loading containers: start."
	Feb 29 03:21:37 pause-783900 dockerd[1290]: time="2024-02-29T03:21:37.908750925Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 29 03:21:37 pause-783900 dockerd[1290]: time="2024-02-29T03:21:37.989221793Z" level=info msg="Loading containers: done."
	Feb 29 03:21:38 pause-783900 dockerd[1290]: time="2024-02-29T03:21:38.009659023Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 03:21:38 pause-783900 dockerd[1290]: time="2024-02-29T03:21:38.009857631Z" level=info msg="Daemon has completed initialization"
	Feb 29 03:21:38 pause-783900 dockerd[1290]: time="2024-02-29T03:21:38.054740754Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 03:21:38 pause-783900 dockerd[1290]: time="2024-02-29T03:21:38.054862459Z" level=info msg="API listen on [::]:2376"
	Feb 29 03:21:38 pause-783900 systemd[1]: Started Docker Application Container Engine.
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.146412363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.146864392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.147022102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.147391225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.162259572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.162449084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.162463085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.162858010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.183747940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.183992756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.184239472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.184493788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.193768079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.194382318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.194570530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.194887450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.573604766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.573685471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.573709373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.573842081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.596304312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.596805244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.596963554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.597604194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.627875722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.627940926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.627961028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.628288048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.636697884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.637110210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.637137512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.637338025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:07 pause-783900 dockerd[1297]: time="2024-02-29T03:22:07.403110012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:22:07 pause-783900 dockerd[1297]: time="2024-02-29T03:22:07.403250518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:22:07 pause-783900 dockerd[1297]: time="2024-02-29T03:22:07.403265219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:07 pause-783900 dockerd[1297]: time="2024-02-29T03:22:07.404003954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:07 pause-783900 dockerd[1297]: time="2024-02-29T03:22:07.676955671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:22:07 pause-783900 dockerd[1297]: time="2024-02-29T03:22:07.677445494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:22:07 pause-783900 dockerd[1297]: time="2024-02-29T03:22:07.678513644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:07 pause-783900 dockerd[1297]: time="2024-02-29T03:22:07.678892462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.178354943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.178453848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.178940070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.179732407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.203612310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.203696013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.203711114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.203824319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.649711061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.649817366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.649861968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.649986674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.731105599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.731255406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.731290308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.731525719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:27:15 pause-783900 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.296645925Z" level=info msg="Processing signal 'terminated'"
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.540946483Z" level=info msg="ignoring event" container=3c60f4309764f4a94f8d540061b596d16a805745e7639e0f6432e83e82b96ea4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.543401185Z" level=info msg="shim disconnected" id=3c60f4309764f4a94f8d540061b596d16a805745e7639e0f6432e83e82b96ea4 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.543842703Z" level=warning msg="cleaning up after shim disconnected" id=3c60f4309764f4a94f8d540061b596d16a805745e7639e0f6432e83e82b96ea4 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.543877004Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.552249853Z" level=info msg="ignoring event" container=30e9b3ae5d144ee6cd6456404059cf722dc2edfdd8ac8be0f21c1224906faec2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.556111813Z" level=info msg="shim disconnected" id=30e9b3ae5d144ee6cd6456404059cf722dc2edfdd8ac8be0f21c1224906faec2 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.556164415Z" level=warning msg="cleaning up after shim disconnected" id=30e9b3ae5d144ee6cd6456404059cf722dc2edfdd8ac8be0f21c1224906faec2 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.556174916Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.558972032Z" level=info msg="shim disconnected" id=e1799cbe5ab6b10fbd28395ed2493a2cef5de149ec4ef5d0afc7dc799ee67042 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.559155940Z" level=warning msg="cleaning up after shim disconnected" id=e1799cbe5ab6b10fbd28395ed2493a2cef5de149ec4ef5d0afc7dc799ee67042 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.559265144Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.571090036Z" level=info msg="ignoring event" container=e1799cbe5ab6b10fbd28395ed2493a2cef5de149ec4ef5d0afc7dc799ee67042 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.585139520Z" level=info msg="shim disconnected" id=a6b6fd2089a535e59f60bf6cc48a286cbdc31643ffd2b999640a0a90d71c1d09 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.585265225Z" level=info msg="ignoring event" container=a6b6fd2089a535e59f60bf6cc48a286cbdc31643ffd2b999640a0a90d71c1d09 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.585471134Z" level=warning msg="cleaning up after shim disconnected" id=a6b6fd2089a535e59f60bf6cc48a286cbdc31643ffd2b999640a0a90d71c1d09 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.585578938Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.599997038Z" level=info msg="ignoring event" container=a8cc7b255e87e240cf58453b0d2b33fb121229c22bde862abf0c5a5a40a8be29 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.600077541Z" level=info msg="ignoring event" container=a49bab14dcdde980b9d4530afc6d1c0a5ce8ba92895e886bbb9ec1d1b54826b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.600565861Z" level=info msg="shim disconnected" id=a49bab14dcdde980b9d4530afc6d1c0a5ce8ba92895e886bbb9ec1d1b54826b3 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.600761770Z" level=warning msg="cleaning up after shim disconnected" id=a49bab14dcdde980b9d4530afc6d1c0a5ce8ba92895e886bbb9ec1d1b54826b3 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.601596004Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.602733051Z" level=info msg="shim disconnected" id=a8cc7b255e87e240cf58453b0d2b33fb121229c22bde862abf0c5a5a40a8be29 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.602813455Z" level=warning msg="cleaning up after shim disconnected" id=a8cc7b255e87e240cf58453b0d2b33fb121229c22bde862abf0c5a5a40a8be29 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.602833956Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.629395560Z" level=info msg="shim disconnected" id=f77b060c31fc3678d004c594d48868efad1408dddf2eed267aa3f35664c661c7 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.629608069Z" level=warning msg="cleaning up after shim disconnected" id=f77b060c31fc3678d004c594d48868efad1408dddf2eed267aa3f35664c661c7 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.629628570Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.629992185Z" level=info msg="ignoring event" container=f77b060c31fc3678d004c594d48868efad1408dddf2eed267aa3f35664c661c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.630052587Z" level=info msg="ignoring event" container=056999a155f8d25e47fadcadcb6ed9b567604f767a7c9e2234e8336808779660 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.653964882Z" level=info msg="shim disconnected" id=056999a155f8d25e47fadcadcb6ed9b567604f767a7c9e2234e8336808779660 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.655278836Z" level=warning msg="cleaning up after shim disconnected" id=056999a155f8d25e47fadcadcb6ed9b567604f767a7c9e2234e8336808779660 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.655675453Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.669871243Z" level=info msg="shim disconnected" id=3003714e6fe41464a7a6f9b2abf6a1652e015bac7ee993ea0f83601e7d8e00ae namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.670201457Z" level=warning msg="cleaning up after shim disconnected" id=3003714e6fe41464a7a6f9b2abf6a1652e015bac7ee993ea0f83601e7d8e00ae namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.670484768Z" level=info msg="ignoring event" container=0ed420284b5923509db60be7d0e186c7f3571768d98182eb5527904494af70e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.670573172Z" level=info msg="ignoring event" container=3003714e6fe41464a7a6f9b2abf6a1652e015bac7ee993ea0f83601e7d8e00ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.670595873Z" level=info msg="ignoring event" container=dd84565820872a90fefb3ddbb518cf7d370081ad3314d452bfb7a8a9ccd1c512 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.670556971Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.682809781Z" level=info msg="shim disconnected" id=0ed420284b5923509db60be7d0e186c7f3571768d98182eb5527904494af70e1 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.683170196Z" level=warning msg="cleaning up after shim disconnected" id=0ed420284b5923509db60be7d0e186c7f3571768d98182eb5527904494af70e1 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.683326202Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.691867357Z" level=info msg="shim disconnected" id=dd84565820872a90fefb3ddbb518cf7d370081ad3314d452bfb7a8a9ccd1c512 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.704693791Z" level=warning msg="cleaning up after shim disconnected" id=dd84565820872a90fefb3ddbb518cf7d370081ad3314d452bfb7a8a9ccd1c512 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.705033705Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:20 pause-783900 dockerd[1297]: time="2024-02-29T03:27:20.493975815Z" level=info msg="shim disconnected" id=b465631830b4c6fe8dd1b9880171515c917ecf95bef44579b35405b9ff01ba57 namespace=moby
	Feb 29 03:27:20 pause-783900 dockerd[1297]: time="2024-02-29T03:27:20.494930454Z" level=warning msg="cleaning up after shim disconnected" id=b465631830b4c6fe8dd1b9880171515c917ecf95bef44579b35405b9ff01ba57 namespace=moby
	Feb 29 03:27:20 pause-783900 dockerd[1290]: time="2024-02-29T03:27:20.495286969Z" level=info msg="ignoring event" container=b465631830b4c6fe8dd1b9880171515c917ecf95bef44579b35405b9ff01ba57 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:20 pause-783900 dockerd[1297]: time="2024-02-29T03:27:20.496214808Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:20 pause-783900 dockerd[1297]: time="2024-02-29T03:27:20.521963178Z" level=info msg="shim disconnected" id=fb2b5d637d7c4bab5f2c184e4cb06c36ccb6683cae59c81d1dfcb6c298a4f998 namespace=moby
	Feb 29 03:27:20 pause-783900 dockerd[1297]: time="2024-02-29T03:27:20.522191088Z" level=warning msg="cleaning up after shim disconnected" id=fb2b5d637d7c4bab5f2c184e4cb06c36ccb6683cae59c81d1dfcb6c298a4f998 namespace=moby
	Feb 29 03:27:20 pause-783900 dockerd[1297]: time="2024-02-29T03:27:20.522248990Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:20 pause-783900 dockerd[1290]: time="2024-02-29T03:27:20.522757711Z" level=info msg="ignoring event" container=fb2b5d637d7c4bab5f2c184e4cb06c36ccb6683cae59c81d1dfcb6c298a4f998 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:25 pause-783900 dockerd[1290]: time="2024-02-29T03:27:25.450540029Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ef43c561bf3ce1bad5430555475363a1f11677fad2a8b1aec5929441a829aa37
	Feb 29 03:27:25 pause-783900 dockerd[1290]: time="2024-02-29T03:27:25.508257912Z" level=info msg="ignoring event" container=ef43c561bf3ce1bad5430555475363a1f11677fad2a8b1aec5929441a829aa37 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:25 pause-783900 dockerd[1297]: time="2024-02-29T03:27:25.509487080Z" level=info msg="shim disconnected" id=ef43c561bf3ce1bad5430555475363a1f11677fad2a8b1aec5929441a829aa37 namespace=moby
	Feb 29 03:27:25 pause-783900 dockerd[1297]: time="2024-02-29T03:27:25.509889402Z" level=warning msg="cleaning up after shim disconnected" id=ef43c561bf3ce1bad5430555475363a1f11677fad2a8b1aec5929441a829aa37 namespace=moby
	Feb 29 03:27:25 pause-783900 dockerd[1297]: time="2024-02-29T03:27:25.509914704Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:25 pause-783900 dockerd[1290]: time="2024-02-29T03:27:25.582813524Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 29 03:27:25 pause-783900 dockerd[1290]: time="2024-02-29T03:27:25.583093940Z" level=info msg="Daemon shutdown complete"
	Feb 29 03:27:25 pause-783900 dockerd[1290]: time="2024-02-29T03:27:25.584090095Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 03:27:25 pause-783900 dockerd[1290]: time="2024-02-29T03:27:25.584109396Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 03:27:26 pause-783900 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 03:27:26 pause-783900 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 03:27:26 pause-783900 systemd[1]: docker.service: Consumed 14.662s CPU time.
	Feb 29 03:27:26 pause-783900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 03:27:26 pause-783900 dockerd[7250]: time="2024-02-29T03:27:26.669670633Z" level=info msg="Starting up"
	Feb 29 03:28:26 pause-783900 dockerd[7250]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 03:28:26 pause-783900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 03:28:26 pause-783900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 03:28:26 pause-783900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 29 03:20:49 pause-783900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 03:20:49 pause-783900 dockerd[645]: time="2024-02-29T03:20:49.740897581Z" level=info msg="Starting up"
	Feb 29 03:20:49 pause-783900 dockerd[645]: time="2024-02-29T03:20:49.741704741Z" level=info msg="containerd not running, starting managed containerd"
	Feb 29 03:20:49 pause-783900 dockerd[645]: time="2024-02-29T03:20:49.742716242Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=651
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.776222688Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.805344265Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.805437783Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.805562008Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.805581112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.805810757Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.805917078Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.806143623Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.806253045Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.806273249Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.806285751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.806381170Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.806799153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.809961680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.810139416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.810321352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.810439975Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.810846356Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.811005688Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.811117310Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.820411653Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.820574586Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.820601591Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.820694109Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.820714413Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821000870Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821311132Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821554180Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821674904Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821698508Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821713812Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821735016Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821748818Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821764422Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821779825Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821793427Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821806830Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821823033Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821847438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821864341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821878944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821895348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821910351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821924853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821939256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821953459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821968162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.821986366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822007970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822022873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822036776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822052979Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822077884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822091086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822103289Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822169002Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822201608Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822298628Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822312030Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822414150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822441556Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822459359Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822827632Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822894446Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822932053Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 03:20:49 pause-783900 dockerd[651]: time="2024-02-29T03:20:49.822948456Z" level=info msg="containerd successfully booted in 0.048183s"
	Feb 29 03:20:49 pause-783900 dockerd[645]: time="2024-02-29T03:20:49.852835685Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 03:20:49 pause-783900 dockerd[645]: time="2024-02-29T03:20:49.869477686Z" level=info msg="Loading containers: start."
	Feb 29 03:20:50 pause-783900 dockerd[645]: time="2024-02-29T03:20:50.116465882Z" level=info msg="Loading containers: done."
	Feb 29 03:20:50 pause-783900 dockerd[645]: time="2024-02-29T03:20:50.130564472Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 03:20:50 pause-783900 dockerd[645]: time="2024-02-29T03:20:50.130764710Z" level=info msg="Daemon has completed initialization"
	Feb 29 03:20:50 pause-783900 dockerd[645]: time="2024-02-29T03:20:50.182082500Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 03:20:50 pause-783900 systemd[1]: Started Docker Application Container Engine.
	Feb 29 03:20:50 pause-783900 dockerd[645]: time="2024-02-29T03:20:50.182870750Z" level=info msg="API listen on [::]:2376"
	Feb 29 03:21:20 pause-783900 dockerd[645]: time="2024-02-29T03:21:20.029864076Z" level=info msg="Processing signal 'terminated'"
	Feb 29 03:21:20 pause-783900 dockerd[645]: time="2024-02-29T03:21:20.031309334Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 29 03:21:20 pause-783900 dockerd[645]: time="2024-02-29T03:21:20.031922059Z" level=info msg="Daemon shutdown complete"
	Feb 29 03:21:20 pause-783900 dockerd[645]: time="2024-02-29T03:21:20.031983962Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 03:21:20 pause-783900 dockerd[645]: time="2024-02-29T03:21:20.032037864Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 03:21:20 pause-783900 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 03:21:21 pause-783900 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 03:21:21 pause-783900 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 03:21:21 pause-783900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.109544927Z" level=info msg="Starting up"
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.110922883Z" level=info msg="containerd not running, starting managed containerd"
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.118063973Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=994
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.145735997Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.174457063Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.174541066Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.174594969Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.174618370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.174656571Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.174689373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.175009786Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.175056787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.175078588Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.175091689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.175123290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.175405802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.184871286Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.184982491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185164798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185229501Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185256702Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185280603Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185301904Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185574115Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185709320Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185730621Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185746722Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185764122Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.185817824Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186525753Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186608657Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186625957Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186649958Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186672359Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186688560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186710361Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186732562Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186748962Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186762763Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186779064Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186791464Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186826065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186859567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186873567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186887668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186907969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186924469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186936870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186951571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.186968471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187007573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187125478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187145678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187160979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187225882Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187264083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187281184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187293884Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187376988Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187516393Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187534294Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187546395Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187618998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187635698Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187646899Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.187984012Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.188140819Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.188235923Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 03:21:21 pause-783900 dockerd[994]: time="2024-02-29T03:21:21.188347327Z" level=info msg="containerd successfully booted in 0.043821s"
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.216469669Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.225529537Z" level=info msg="Loading containers: start."
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.407886444Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.479689860Z" level=info msg="Loading containers: done."
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.494255752Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.494412758Z" level=info msg="Daemon has completed initialization"
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.536377362Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 03:21:21 pause-783900 dockerd[988]: time="2024-02-29T03:21:21.536504068Z" level=info msg="API listen on [::]:2376"
	Feb 29 03:21:21 pause-783900 systemd[1]: Started Docker Application Container Engine.
	Feb 29 03:21:36 pause-783900 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 03:21:36 pause-783900 dockerd[988]: time="2024-02-29T03:21:36.340051512Z" level=info msg="Processing signal 'terminated'"
	Feb 29 03:21:36 pause-783900 dockerd[988]: time="2024-02-29T03:21:36.342741121Z" level=info msg="Daemon shutdown complete"
	Feb 29 03:21:36 pause-783900 dockerd[988]: time="2024-02-29T03:21:36.343034633Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 03:21:36 pause-783900 dockerd[988]: time="2024-02-29T03:21:36.343519853Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 03:21:36 pause-783900 dockerd[988]: time="2024-02-29T03:21:36.343557454Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Feb 29 03:21:37 pause-783900 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 03:21:37 pause-783900 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 03:21:37 pause-783900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 03:21:37 pause-783900 dockerd[1290]: time="2024-02-29T03:21:37.427740888Z" level=info msg="Starting up"
	Feb 29 03:21:37 pause-783900 dockerd[1290]: time="2024-02-29T03:21:37.428743429Z" level=info msg="containerd not running, starting managed containerd"
	Feb 29 03:21:37 pause-783900 dockerd[1290]: time="2024-02-29T03:21:37.430226989Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1297
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.465043803Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.494675107Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.494808312Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.494855114Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.494870415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.494900716Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.494919017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.495102624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.495125125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.495139426Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.495151326Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.495175227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.495342734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.498723171Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.498854077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.499042984Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.499158489Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.499257693Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.499458201Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.499571406Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.499738813Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.499882018Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.499996223Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.500018624Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.500036425Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.500091827Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.500732753Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.500907460Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501023565Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501045466Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501064366Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501082167Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501098368Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501117769Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501135669Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501151770Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501167871Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501222073Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501269275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501289676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501307976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501324877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501340278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501587988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501705692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501732394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501749894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501768295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501785096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501800996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501816997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501836098Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501860699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501877699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.501893200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.502067407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.502175612Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.502242314Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.502256215Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.502323118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.502445723Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.502487324Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.502732534Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.502923742Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.503077848Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 03:21:37 pause-783900 dockerd[1297]: time="2024-02-29T03:21:37.503572768Z" level=info msg="containerd successfully booted in 0.041610s"
	Feb 29 03:21:37 pause-783900 dockerd[1290]: time="2024-02-29T03:21:37.532456241Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 03:21:37 pause-783900 dockerd[1290]: time="2024-02-29T03:21:37.742776484Z" level=info msg="Loading containers: start."
	Feb 29 03:21:37 pause-783900 dockerd[1290]: time="2024-02-29T03:21:37.908750925Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 29 03:21:37 pause-783900 dockerd[1290]: time="2024-02-29T03:21:37.989221793Z" level=info msg="Loading containers: done."
	Feb 29 03:21:38 pause-783900 dockerd[1290]: time="2024-02-29T03:21:38.009659023Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 03:21:38 pause-783900 dockerd[1290]: time="2024-02-29T03:21:38.009857631Z" level=info msg="Daemon has completed initialization"
	Feb 29 03:21:38 pause-783900 dockerd[1290]: time="2024-02-29T03:21:38.054740754Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 03:21:38 pause-783900 dockerd[1290]: time="2024-02-29T03:21:38.054862459Z" level=info msg="API listen on [::]:2376"
	Feb 29 03:21:38 pause-783900 systemd[1]: Started Docker Application Container Engine.
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.146412363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.146864392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.147022102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.147391225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.162259572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.162449084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.162463085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.162858010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.183747940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.183992756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.184239472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.184493788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.193768079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.194382318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.194570530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.194887450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.573604766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.573685471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.573709373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.573842081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.596304312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.596805244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.596963554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.597604194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.627875722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.627940926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.627961028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.628288048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.636697884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.637110210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.637137512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:21:47 pause-783900 dockerd[1297]: time="2024-02-29T03:21:47.637338025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:07 pause-783900 dockerd[1297]: time="2024-02-29T03:22:07.403110012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:22:07 pause-783900 dockerd[1297]: time="2024-02-29T03:22:07.403250518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:22:07 pause-783900 dockerd[1297]: time="2024-02-29T03:22:07.403265219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:07 pause-783900 dockerd[1297]: time="2024-02-29T03:22:07.404003954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:07 pause-783900 dockerd[1297]: time="2024-02-29T03:22:07.676955671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:22:07 pause-783900 dockerd[1297]: time="2024-02-29T03:22:07.677445494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:22:07 pause-783900 dockerd[1297]: time="2024-02-29T03:22:07.678513644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:07 pause-783900 dockerd[1297]: time="2024-02-29T03:22:07.678892462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.178354943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.178453848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.178940070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.179732407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.203612310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.203696013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.203711114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.203824319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.649711061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.649817366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.649861968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.649986674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.731105599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.731255406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.731290308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:22:09 pause-783900 dockerd[1297]: time="2024-02-29T03:22:09.731525719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 03:27:15 pause-783900 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.296645925Z" level=info msg="Processing signal 'terminated'"
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.540946483Z" level=info msg="ignoring event" container=3c60f4309764f4a94f8d540061b596d16a805745e7639e0f6432e83e82b96ea4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.543401185Z" level=info msg="shim disconnected" id=3c60f4309764f4a94f8d540061b596d16a805745e7639e0f6432e83e82b96ea4 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.543842703Z" level=warning msg="cleaning up after shim disconnected" id=3c60f4309764f4a94f8d540061b596d16a805745e7639e0f6432e83e82b96ea4 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.543877004Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.552249853Z" level=info msg="ignoring event" container=30e9b3ae5d144ee6cd6456404059cf722dc2edfdd8ac8be0f21c1224906faec2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.556111813Z" level=info msg="shim disconnected" id=30e9b3ae5d144ee6cd6456404059cf722dc2edfdd8ac8be0f21c1224906faec2 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.556164415Z" level=warning msg="cleaning up after shim disconnected" id=30e9b3ae5d144ee6cd6456404059cf722dc2edfdd8ac8be0f21c1224906faec2 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.556174916Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.558972032Z" level=info msg="shim disconnected" id=e1799cbe5ab6b10fbd28395ed2493a2cef5de149ec4ef5d0afc7dc799ee67042 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.559155940Z" level=warning msg="cleaning up after shim disconnected" id=e1799cbe5ab6b10fbd28395ed2493a2cef5de149ec4ef5d0afc7dc799ee67042 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.559265144Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.571090036Z" level=info msg="ignoring event" container=e1799cbe5ab6b10fbd28395ed2493a2cef5de149ec4ef5d0afc7dc799ee67042 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.585139520Z" level=info msg="shim disconnected" id=a6b6fd2089a535e59f60bf6cc48a286cbdc31643ffd2b999640a0a90d71c1d09 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.585265225Z" level=info msg="ignoring event" container=a6b6fd2089a535e59f60bf6cc48a286cbdc31643ffd2b999640a0a90d71c1d09 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.585471134Z" level=warning msg="cleaning up after shim disconnected" id=a6b6fd2089a535e59f60bf6cc48a286cbdc31643ffd2b999640a0a90d71c1d09 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.585578938Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.599997038Z" level=info msg="ignoring event" container=a8cc7b255e87e240cf58453b0d2b33fb121229c22bde862abf0c5a5a40a8be29 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.600077541Z" level=info msg="ignoring event" container=a49bab14dcdde980b9d4530afc6d1c0a5ce8ba92895e886bbb9ec1d1b54826b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.600565861Z" level=info msg="shim disconnected" id=a49bab14dcdde980b9d4530afc6d1c0a5ce8ba92895e886bbb9ec1d1b54826b3 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.600761770Z" level=warning msg="cleaning up after shim disconnected" id=a49bab14dcdde980b9d4530afc6d1c0a5ce8ba92895e886bbb9ec1d1b54826b3 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.601596004Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.602733051Z" level=info msg="shim disconnected" id=a8cc7b255e87e240cf58453b0d2b33fb121229c22bde862abf0c5a5a40a8be29 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.602813455Z" level=warning msg="cleaning up after shim disconnected" id=a8cc7b255e87e240cf58453b0d2b33fb121229c22bde862abf0c5a5a40a8be29 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.602833956Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.629395560Z" level=info msg="shim disconnected" id=f77b060c31fc3678d004c594d48868efad1408dddf2eed267aa3f35664c661c7 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.629608069Z" level=warning msg="cleaning up after shim disconnected" id=f77b060c31fc3678d004c594d48868efad1408dddf2eed267aa3f35664c661c7 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.629628570Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.629992185Z" level=info msg="ignoring event" container=f77b060c31fc3678d004c594d48868efad1408dddf2eed267aa3f35664c661c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.630052587Z" level=info msg="ignoring event" container=056999a155f8d25e47fadcadcb6ed9b567604f767a7c9e2234e8336808779660 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.653964882Z" level=info msg="shim disconnected" id=056999a155f8d25e47fadcadcb6ed9b567604f767a7c9e2234e8336808779660 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.655278836Z" level=warning msg="cleaning up after shim disconnected" id=056999a155f8d25e47fadcadcb6ed9b567604f767a7c9e2234e8336808779660 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.655675453Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.669871243Z" level=info msg="shim disconnected" id=3003714e6fe41464a7a6f9b2abf6a1652e015bac7ee993ea0f83601e7d8e00ae namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.670201457Z" level=warning msg="cleaning up after shim disconnected" id=3003714e6fe41464a7a6f9b2abf6a1652e015bac7ee993ea0f83601e7d8e00ae namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.670484768Z" level=info msg="ignoring event" container=0ed420284b5923509db60be7d0e186c7f3571768d98182eb5527904494af70e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.670573172Z" level=info msg="ignoring event" container=3003714e6fe41464a7a6f9b2abf6a1652e015bac7ee993ea0f83601e7d8e00ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1290]: time="2024-02-29T03:27:15.670595873Z" level=info msg="ignoring event" container=dd84565820872a90fefb3ddbb518cf7d370081ad3314d452bfb7a8a9ccd1c512 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.670556971Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.682809781Z" level=info msg="shim disconnected" id=0ed420284b5923509db60be7d0e186c7f3571768d98182eb5527904494af70e1 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.683170196Z" level=warning msg="cleaning up after shim disconnected" id=0ed420284b5923509db60be7d0e186c7f3571768d98182eb5527904494af70e1 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.683326202Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.691867357Z" level=info msg="shim disconnected" id=dd84565820872a90fefb3ddbb518cf7d370081ad3314d452bfb7a8a9ccd1c512 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.704693791Z" level=warning msg="cleaning up after shim disconnected" id=dd84565820872a90fefb3ddbb518cf7d370081ad3314d452bfb7a8a9ccd1c512 namespace=moby
	Feb 29 03:27:15 pause-783900 dockerd[1297]: time="2024-02-29T03:27:15.705033705Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:20 pause-783900 dockerd[1297]: time="2024-02-29T03:27:20.493975815Z" level=info msg="shim disconnected" id=b465631830b4c6fe8dd1b9880171515c917ecf95bef44579b35405b9ff01ba57 namespace=moby
	Feb 29 03:27:20 pause-783900 dockerd[1297]: time="2024-02-29T03:27:20.494930454Z" level=warning msg="cleaning up after shim disconnected" id=b465631830b4c6fe8dd1b9880171515c917ecf95bef44579b35405b9ff01ba57 namespace=moby
	Feb 29 03:27:20 pause-783900 dockerd[1290]: time="2024-02-29T03:27:20.495286969Z" level=info msg="ignoring event" container=b465631830b4c6fe8dd1b9880171515c917ecf95bef44579b35405b9ff01ba57 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:20 pause-783900 dockerd[1297]: time="2024-02-29T03:27:20.496214808Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:20 pause-783900 dockerd[1297]: time="2024-02-29T03:27:20.521963178Z" level=info msg="shim disconnected" id=fb2b5d637d7c4bab5f2c184e4cb06c36ccb6683cae59c81d1dfcb6c298a4f998 namespace=moby
	Feb 29 03:27:20 pause-783900 dockerd[1297]: time="2024-02-29T03:27:20.522191088Z" level=warning msg="cleaning up after shim disconnected" id=fb2b5d637d7c4bab5f2c184e4cb06c36ccb6683cae59c81d1dfcb6c298a4f998 namespace=moby
	Feb 29 03:27:20 pause-783900 dockerd[1297]: time="2024-02-29T03:27:20.522248990Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:20 pause-783900 dockerd[1290]: time="2024-02-29T03:27:20.522757711Z" level=info msg="ignoring event" container=fb2b5d637d7c4bab5f2c184e4cb06c36ccb6683cae59c81d1dfcb6c298a4f998 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:25 pause-783900 dockerd[1290]: time="2024-02-29T03:27:25.450540029Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ef43c561bf3ce1bad5430555475363a1f11677fad2a8b1aec5929441a829aa37
	Feb 29 03:27:25 pause-783900 dockerd[1290]: time="2024-02-29T03:27:25.508257912Z" level=info msg="ignoring event" container=ef43c561bf3ce1bad5430555475363a1f11677fad2a8b1aec5929441a829aa37 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 03:27:25 pause-783900 dockerd[1297]: time="2024-02-29T03:27:25.509487080Z" level=info msg="shim disconnected" id=ef43c561bf3ce1bad5430555475363a1f11677fad2a8b1aec5929441a829aa37 namespace=moby
	Feb 29 03:27:25 pause-783900 dockerd[1297]: time="2024-02-29T03:27:25.509889402Z" level=warning msg="cleaning up after shim disconnected" id=ef43c561bf3ce1bad5430555475363a1f11677fad2a8b1aec5929441a829aa37 namespace=moby
	Feb 29 03:27:25 pause-783900 dockerd[1297]: time="2024-02-29T03:27:25.509914704Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 03:27:25 pause-783900 dockerd[1290]: time="2024-02-29T03:27:25.582813524Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 29 03:27:25 pause-783900 dockerd[1290]: time="2024-02-29T03:27:25.583093940Z" level=info msg="Daemon shutdown complete"
	Feb 29 03:27:25 pause-783900 dockerd[1290]: time="2024-02-29T03:27:25.584090095Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 03:27:25 pause-783900 dockerd[1290]: time="2024-02-29T03:27:25.584109396Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 03:27:26 pause-783900 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 03:27:26 pause-783900 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 03:27:26 pause-783900 systemd[1]: docker.service: Consumed 14.662s CPU time.
	Feb 29 03:27:26 pause-783900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 03:27:26 pause-783900 dockerd[7250]: time="2024-02-29T03:27:26.669670633Z" level=info msg="Starting up"
	Feb 29 03:28:26 pause-783900 dockerd[7250]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 03:28:26 pause-783900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 03:28:26 pause-783900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 03:28:26 pause-783900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0229 03:28:26.592580   13892 out.go:239] * 
	W0229 03:28:26.594573   13892 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 03:28:26.594573   13892 out.go:177] 

** /stderr **
pause_test.go:94: failed to second start a running minikube with args: "out/minikube-windows-amd64.exe start -p pause-783900 --alsologtostderr -v=1 --driver=hyperv" : exit status 90
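The `exit status 90` here maps to the daemon-start failure captured above, where dockerd repeatedly fails to dial `/run/containerd/containerd.sock` until systemd gives up. As a minimal sketch (not part of this test run), the following commands could be used to check containerd's state inside the VM; the profile name `pause-783900` is taken from the log, and the exact invocations are illustrative:

```shell
# Emit candidate diagnosis commands for a "dockerd cannot dial containerd.sock"
# failure. They assume a minikube profile named "pause-783900" (from the log)
# and are printed rather than executed, so the sketch is self-contained.
cat <<'EOF'
minikube ssh -p pause-783900 -- sudo systemctl status containerd --no-pager
minikube ssh -p pause-783900 -- sudo journalctl -u containerd --no-pager -n 50
minikube ssh -p pause-783900 -- ls -l /run/containerd/containerd.sock
EOF
```

If containerd itself failed to start, its journal output would typically show why the socket was never created; if the socket exists, the next suspect is the dial timeout during dockerd startup.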
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-783900 -n pause-783900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-783900 -n pause-783900: exit status 2 (13.8629273s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0229 03:28:27.055240    8940 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-783900 logs -n 25
E0229 03:29:29.014509    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-783900 logs -n 25: (2m46.7436838s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-103800                             | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | sudo cat                                             |                       |                   |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |                   |         |                     |                     |
	| ssh     | -p custom-flannel-103800 sudo                        | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | systemctl status docker --all                        |                       |                   |         |                     |                     |
	|         | --full --no-pager                                    |                       |                   |         |                     |                     |
	| ssh     | -p custom-flannel-103800                             | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | sudo systemctl cat docker                            |                       |                   |         |                     |                     |
	|         | --no-pager                                           |                       |                   |         |                     |                     |
	| ssh     | -p custom-flannel-103800 sudo                        | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | cat /etc/docker/daemon.json                          |                       |                   |         |                     |                     |
	| ssh     | -p custom-flannel-103800 sudo                        | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | docker system info                                   |                       |                   |         |                     |                     |
	| ssh     | -p custom-flannel-103800 sudo                        | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | systemctl status cri-docker                          |                       |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                       |                   |         |                     |                     |
	| ssh     | -p custom-flannel-103800                             | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | sudo systemctl cat cri-docker                        |                       |                   |         |                     |                     |
	|         | --no-pager                                           |                       |                   |         |                     |                     |
	| ssh     | -p custom-flannel-103800 sudo cat                    | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |                   |         |                     |                     |
	| ssh     | -p custom-flannel-103800 sudo cat                    | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |                   |         |                     |                     |
	| ssh     | -p custom-flannel-103800 sudo                        | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | cri-dockerd --version                                |                       |                   |         |                     |                     |
	| ssh     | -p auto-103800 sudo systemctl                        | auto-103800           | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | status cri-docker --all --full                       |                       |                   |         |                     |                     |
	|         | --no-pager                                           |                       |                   |         |                     |                     |
	| ssh     | -p kindnet-103800 sudo                               | kindnet-103800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC | 29 Feb 24 03:28 UTC |
	|         | iptables-save                                        |                       |                   |         |                     |                     |
	| ssh     | -p custom-flannel-103800 sudo                        | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | systemctl status containerd                          |                       |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                       |                   |         |                     |                     |
	| ssh     | -p custom-flannel-103800                             | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | sudo systemctl cat containerd                        |                       |                   |         |                     |                     |
	|         | --no-pager                                           |                       |                   |         |                     |                     |
	| ssh     | -p custom-flannel-103800 sudo cat                    | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                       |                   |         |                     |                     |
	| ssh     | -p custom-flannel-103800                             | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | sudo cat                                             |                       |                   |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |                   |         |                     |                     |
	| ssh     | -p custom-flannel-103800 sudo                        | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | containerd config dump                               |                       |                   |         |                     |                     |
	| ssh     | -p custom-flannel-103800 sudo                        | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | systemctl status crio --all                          |                       |                   |         |                     |                     |
	|         | --full --no-pager                                    |                       |                   |         |                     |                     |
	| ssh     | -p custom-flannel-103800 sudo                        | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                       |                   |         |                     |                     |
	| ssh     | -p custom-flannel-103800 sudo                        | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | find /etc/crio -type f -exec                         |                       |                   |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                       |                   |         |                     |                     |
	| ssh     | -p custom-flannel-103800 sudo                        | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | crio config                                          |                       |                   |         |                     |                     |
	| delete  | -p custom-flannel-103800                             | custom-flannel-103800 | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC | 29 Feb 24 03:28 UTC |
	| start   | -p false-103800 --memory=3072                        | false-103800          | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                       |                   |         |                     |                     |
	|         | --wait-timeout=15m --cni=false                       |                       |                   |         |                     |                     |
	|         | --driver=hyperv                                      |                       |                   |         |                     |                     |
	| ssh     | -p auto-103800 sudo systemctl                        | auto-103800           | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | cat cri-docker --no-pager                            |                       |                   |         |                     |                     |
	| ssh     | -p kindnet-103800 sudo                               | kindnet-103800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 03:28 UTC |                     |
	|         | iptables -t nat -L -n -v                             |                       |                   |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 03:28:31
	Running on machine: minikube5
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 03:28:31.514839    8520 out.go:291] Setting OutFile to fd 1984 ...
	I0229 03:28:31.514839    8520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 03:28:31.514839    8520 out.go:304] Setting ErrFile to fd 2036...
	I0229 03:28:31.514839    8520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 03:28:31.535840    8520 out.go:298] Setting JSON to false
	I0229 03:28:31.540844    8520 start.go:129] hostinfo: {"hostname":"minikube5","uptime":273538,"bootTime":1708903773,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 03:28:31.540844    8520 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 03:28:31.541846    8520 out.go:177] * [false-103800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 03:28:31.542842    8520 notify.go:220] Checking for updates...
	I0229 03:28:31.543843    8520 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 03:28:31.543843    8520 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 03:28:31.544845    8520 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 03:28:31.545844    8520 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 03:28:31.546842    8520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 03:28:31.548841    8520 config.go:182] Loaded profile config "auto-103800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 03:28:31.548841    8520 config.go:182] Loaded profile config "kindnet-103800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 03:28:31.549845    8520 config.go:182] Loaded profile config "pause-783900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 03:28:31.549845    8520 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 03:28:37.487230    8520 out.go:177] * Using the hyperv driver based on user configuration
	I0229 03:28:37.488014    8520 start.go:299] selected driver: hyperv
	I0229 03:28:37.488648    8520 start.go:903] validating driver "hyperv" against <nil>
	I0229 03:28:37.488648    8520 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 03:28:37.538727    8520 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 03:28:37.540329    8520 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 03:28:37.540329    8520 cni.go:84] Creating CNI manager for "false"
	I0229 03:28:37.540329    8520 start_flags.go:323] config:
	{Name:false-103800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-103800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 03:28:37.541251    8520 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 03:28:37.542980    8520 out.go:177] * Starting control plane node false-103800 in cluster false-103800
	I0229 03:28:37.543356    8520 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 03:28:37.543356    8520 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 03:28:37.543356    8520 cache.go:56] Caching tarball of preloaded images
	I0229 03:28:37.544006    8520 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 03:28:37.544216    8520 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 03:28:37.544298    8520 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\false-103800\config.json ...
	I0229 03:28:37.544298    8520 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\false-103800\config.json: {Name:mkcfdededb1a9b2d86b83b90afc2a65f073d029d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 03:28:37.545719    8520 start.go:365] acquiring machines lock for false-103800: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 03:28:37.546353    8520 start.go:369] acquired machines lock for "false-103800" in 532µs
	I0229 03:28:37.546425    8520 start.go:93] Provisioning new machine with config: &{Name:false-103800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-103800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 03:28:37.546425    8520 start.go:125] createHost starting for "" (driver="hyperv")
	
	
	==> Docker <==
	Feb 29 03:27:26 pause-783900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 03:27:26 pause-783900 dockerd[7250]: time="2024-02-29T03:27:26.669670633Z" level=info msg="Starting up"
	Feb 29 03:28:26 pause-783900 dockerd[7250]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 03:28:26 pause-783900 cri-dockerd[1184]: time="2024-02-29T03:28:26Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Feb 29 03:28:26 pause-783900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 03:28:26 pause-783900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 03:28:26 pause-783900 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 03:28:26 pause-783900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Feb 29 03:28:26 pause-783900 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 03:28:26 pause-783900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 03:28:26 pause-783900 dockerd[7390]: time="2024-02-29T03:28:26.916452652Z" level=info msg="Starting up"
	Feb 29 03:29:26 pause-783900 dockerd[7390]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 03:29:26 pause-783900 cri-dockerd[1184]: time="2024-02-29T03:29:26Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Feb 29 03:29:26 pause-783900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 03:29:26 pause-783900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 03:29:26 pause-783900 systemd[1]: Failed to start Docker Application Container Engine.
	Feb 29 03:29:27 pause-783900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Feb 29 03:29:27 pause-783900 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 03:29:27 pause-783900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 03:29:27 pause-783900 dockerd[7611]: time="2024-02-29T03:29:27.218047164Z" level=info msg="Starting up"
	Feb 29 03:30:27 pause-783900 dockerd[7611]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 03:30:27 pause-783900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 03:30:27 pause-783900 cri-dockerd[1184]: time="2024-02-29T03:30:27Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get docker info"
	Feb 29 03:30:27 pause-783900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 03:30:27 pause-783900 systemd[1]: Failed to start Docker Application Container Engine.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-29T03:30:29Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +42.746958] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.184661] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[Feb29 03:21] systemd-fstab-generator[914]: Ignoring "noauto" option for root device
	[  +0.108697] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.558187] systemd-fstab-generator[953]: Ignoring "noauto" option for root device
	[  +0.202367] systemd-fstab-generator[966]: Ignoring "noauto" option for root device
	[  +0.232786] systemd-fstab-generator[980]: Ignoring "noauto" option for root device
	[  +1.842778] systemd-fstab-generator[1137]: Ignoring "noauto" option for root device
	[  +0.208716] systemd-fstab-generator[1149]: Ignoring "noauto" option for root device
	[  +0.192853] systemd-fstab-generator[1161]: Ignoring "noauto" option for root device
	[  +0.295832] systemd-fstab-generator[1176]: Ignoring "noauto" option for root device
	[ +13.774361] systemd-fstab-generator[1282]: Ignoring "noauto" option for root device
	[  +0.107492] kauditd_printk_skb: 205 callbacks suppressed
	[  +8.923161] systemd-fstab-generator[1662]: Ignoring "noauto" option for root device
	[  +0.107310] kauditd_printk_skb: 51 callbacks suppressed
	[  +9.315815] systemd-fstab-generator[2609]: Ignoring "noauto" option for root device
	[  +0.136011] kauditd_printk_skb: 62 callbacks suppressed
	[Feb29 03:22] kauditd_printk_skb: 12 callbacks suppressed
	[Feb29 03:27] systemd-fstab-generator[6780]: Ignoring "noauto" option for root device
	[  +0.206897] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.597491] systemd-fstab-generator[6817]: Ignoring "noauto" option for root device
	[  +0.331327] systemd-fstab-generator[6829]: Ignoring "noauto" option for root device
	[  +0.439315] systemd-fstab-generator[6844]: Ignoring "noauto" option for root device
	[  +5.451430] kauditd_printk_skb: 87 callbacks suppressed
	
	
	==> kernel <==
	 03:31:27 up 11 min,  0 users,  load average: 0.05, 0.19, 0.14
	Linux pause-783900 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 29 03:31:24 pause-783900 kubelet[2635]: E0229 03:31:24.988355    2635 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-783900?timeout=10s\": dial tcp 172.19.15.194:8443: connect: connection refused" interval="7s"
	Feb 29 03:31:26 pause-783900 kubelet[2635]: E0229 03:31:26.363093    2635 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-pause-783900.17b837a3eaba2b20", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-pause-783900", UID:"7e6854014e7efdb8cd8e323c55ef858a", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: Get \"https://172.19.15.194:8443/readyz\": dial tcp 172.19.15.194:8443: connect: connection refused", Source:v1.EventSource{Component:"kubelet", Host:"pause-783900"}, FirstTimestamp:time.Date(2024, time.February, 29, 3, 27, 15, 744369440, time.Local), LastTimestamp:time.Date(2024, time.February, 29, 3, 27, 17, 744442597, time.Local), Count:3, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"pause-783900"}': 'Patch "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-pause-783900.17b837a3eaba2b20": dial tcp 172.19.15.194:8443: connect: connection refused'(may retry after sleeping)
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.157446    2635 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-783900\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-783900?resourceVersion=0&timeout=10s\": dial tcp 172.19.15.194:8443: connect: connection refused"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.158853    2635 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-783900\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-783900?timeout=10s\": dial tcp 172.19.15.194:8443: connect: connection refused"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.159956    2635 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-783900\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-783900?timeout=10s\": dial tcp 172.19.15.194:8443: connect: connection refused"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.161196    2635 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-783900\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-783900?timeout=10s\": dial tcp 172.19.15.194:8443: connect: connection refused"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.163690    2635 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-783900\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-783900?timeout=10s\": dial tcp 172.19.15.194:8443: connect: connection refused"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.163721    2635 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.452652    2635 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.452728    2635 kuberuntime_image.go:103] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: I0229 03:31:27.452745    2635 image_gc_manager.go:210] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.452919    2635 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.453081    2635 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.453270    2635 kubelet.go:2865] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.453358    2635 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.453378    2635 container_log_manager.go:185] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.453407    2635 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.453450    2635 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.453466    2635 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.455395    2635 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.455635    2635 kuberuntime_container.go:477] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.456337    2635 kubelet.go:1402] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.456747    2635 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.456783    2635 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/info\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Feb 29 03:31:27 pause-783900 kubelet[2635]: E0229 03:31:27.521824    2635 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m13.002777867s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.42/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	

-- /stdout --
** stderr ** 
	W0229 03:28:40.937452   14272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 03:29:26.764177   14272 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0229 03:29:26.812165   14272 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0229 03:29:26.856633   14272 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0229 03:29:26.896667   14272 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0229 03:29:26.929723   14272 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0229 03:30:27.065160   14272 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0229 03:30:27.112737   14272 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-783900 -n pause-783900
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-783900 -n pause-783900: exit status 2 (13.156215s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0229 03:31:27.688493    7228 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-783900" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (567.99s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10800.473s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-506700 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperv --kubernetes-version=v1.28.4
E0229 03:41:59.016033    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-103800\client.crt: The system cannot find the path specified.
panic: test timed out after 3h0m0s
running tests:
	TestNetworkPlugins (47m28s)
	TestNetworkPlugins/group/bridge (10m37s)
	TestNetworkPlugins/group/kubenet (9m33s)
	TestStartStop (28m2s)
	TestStartStop/group/default-k8s-diff-port (23s)
	TestStartStop/group/default-k8s-diff-port/serial (23s)
	TestStartStop/group/default-k8s-diff-port/serial/FirstStart (23s)
	TestStartStop/group/embed-certs (1m16s)
	TestStartStop/group/embed-certs/serial (1m16s)
	TestStartStop/group/embed-certs/serial/FirstStart (1m16s)

goroutine 2747 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 23 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00061da00, 0xc00137bbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc00002e2b8, {0x4db8f40, 0x2a, 0x2a}, {0x2b29683?, 0xa481af?, 0x4ddba80?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0006df040)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0006df040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 11 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000111c80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 2725 [select]:
os/exec.(*Cmd).watchCtx(0xc002a5c840, 0xc002985020)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2722
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 24 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 23
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 1577 [chan receive, 29 minutes]:
testing.(*T).Run(0xc00072cea0, {0x2acefa8?, 0xad75b3?}, 0x352aea8)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc00072cea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00072cea0, 0x352acd0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 191 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 190
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 138 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000992480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 201
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 189 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000a82290, 0x3d)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x25e9920?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000992360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a822c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b02560, {0x3a714a0, 0xc000a8b830}, 0x1, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000b02560, 0x3b9aca00, 0x0, 0x1, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 139
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 190 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3a93dc0, 0xc0000542a0}, 0xc002271f50, 0xc002271f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3a93dc0, 0xc0000542a0}, 0xa0?, 0xc002271f50, 0xc002271f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3a93dc0?, 0xc0000542a0?}, 0x0?, 0xad7ee0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002271fd0?, 0xb1e684?, 0xc0020e00f0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 139
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 139 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a822c0, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 201
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 2589 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000a70f50, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x25e9920?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002423bc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a70f80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00225a960, {0x3a714a0, 0xc00308a090}, 0x1, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00225a960, 0x3b9aca00, 0x0, 0x1, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2511
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 2591 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2590
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 2742 [syscall, locked to thread]:
syscall.SyscallN(0x10?, {0xc0023a1b20?, 0x9a7f45?, 0x4e68ec0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4d?, 0xc0023a1b80?, 0x99fe76?, 0x4e68ec0?, 0xc0023a1c08?, 0x992a45?, 0x1627be80108?, 0x4d?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6c0, {0xc00275220d?, 0x5f3, 0xc002752000?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00260c508?, {0xc00275220d?, 0x9cc25e?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00260c508, {0xc00275220d, 0x5f3, 0x5f3})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a7930, {0xc00275220d?, 0xc000585dc0?, 0x20d?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00087de30, {0x3a70060, 0xc000920328})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3a701a0, 0xc00087de30}, {0x3a70060, 0xc000920328}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0023a1e78?, {0x3a701a0, 0xc00087de30})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4d6d6c0?, {0x3a701a0?, 0xc00087de30?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3a701a0, 0xc00087de30}, {0x3a70120, 0xc0000a7930}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0029851a0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2736
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2756 [syscall, locked to thread]:
syscall.SyscallN(0xad55c0?, {0xc0013f9b20?, 0x9a7f45?, 0x4e68ec0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x59?, 0xc0013f9b80?, 0x99fe76?, 0x4e68ec0?, 0xc0013f9c08?, 0x9928db?, 0x988c66?, 0x25e9959?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x700, {0xc0026ea53a?, 0x2c6, 0xc0026ea400?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00269b408?, {0xc0026ea53a?, 0x9cc25e?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00269b408, {0xc0026ea53a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000920378, {0xc0026ea53a?, 0xc0013f9d98?, 0x13a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000adf6e0, {0x3a70060, 0xc0006329d0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3a701a0, 0xc000adf6e0}, {0x3a70060, 0xc0006329d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3a701a0, 0xc000adf6e0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4d6d6c0?, {0x3a701a0?, 0xc000adf6e0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3a701a0, 0xc000adf6e0}, {0x3a70120, 0xc000920378}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc003088600?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1567
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2736 [syscall, locked to thread]:
syscall.SyscallN(0x7ff9c0844de0?, {0xc000721ae0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x744, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0024b0a20)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002a5cf20)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc002a5cf20)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc00061dba0, 0xc002a5cf20)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFirstStart({0x3a93c00?, 0xc0009063f0?}, 0xc00061dba0, {0xc00067b2c0?, 0x65dffd06?}, {0xc00b3c624c?, 0xc000721f60?}, {0xad75b3?, 0xa28eaf?}, {0xc000a08f00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:186 +0xd5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00061dba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00061dba0, 0xc000111980)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2735
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1567 [syscall, locked to thread]:
syscall.SyscallN(0x7ff9c0844de0?, {0xc00006b108?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x0?, 0x1?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x628, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000a89320)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002d81600)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc002d81600)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
os/exec.(*Cmd).CombinedOutput(0xc002d81600)
	/usr/local/go/src/os/exec/exec.go:1012 +0x85
k8s.io/minikube/test/integration.debugLogs(0xc0020ee820, {0xc0007040b0, 0xd})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:630 +0xafe5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0020ee820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:211 +0xbcc
testing.tRunner(0xc0020ee820, 0xc002678100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1564
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1487 [chan receive, 47 minutes]:
testing.(*T).Run(0xc00072c1a0, {0x2acefa8?, 0x9ff56d?}, 0xc000b04030)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00072c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc00072c1a0, 0x352ac88)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1009 [chan send, 138 minutes]:
os/exec.(*Cmd).watchCtx(0xc000994f20, 0xc002590000)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 797
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2338 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3a93dc0, 0xc0000542a0}, 0xc0024edf50, 0xc0024edf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3a93dc0, 0xc0000542a0}, 0x80?, 0xc0024edf50, 0xc0024edf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3a93dc0?, 0xc0000542a0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xb1e625?, 0xc002a8c000?, 0xc002c10180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2334
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 1802 [chan receive]:
testing.(*T).Run(0xc00072da00, {0x2ad04b1?, 0x0?}, 0xc002678a00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00072da00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00072da00, 0xc0030886c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1796
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 918 [chan send, 140 minutes]:
os/exec.(*Cmd).watchCtx(0xc000a5e6e0, 0xc002591620)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 917
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2716 [chan receive]:
testing.(*T).Run(0xc002bac680, {0x2ad9a41?, 0x60400000004?}, 0xc000111380)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc002bac680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc002bac680, 0xc002678a00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1802
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1865 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3a93dc0, 0xc0000542a0}, 0xc0021edf50, 0xc0021edf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3a93dc0, 0xc0000542a0}, 0xa0?, 0xc0021edf50, 0xc0021edf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3a93dc0?, 0xc0000542a0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0021edfd0?, 0xb1e684?, 0xc0004af7a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1893
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 2339 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2338
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 2744 [select]:
os/exec.(*Cmd).watchCtx(0xc002a5cf20, 0xc002c11620)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2736
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 1893 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a82500, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1891
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 1566 [syscall, locked to thread]:
syscall.SyscallN(0x7ff9c0844de0?, {0xc002719108?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x0?, 0x1?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x410, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc002507b90)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002c79080)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc002c79080)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
os/exec.(*Cmd).CombinedOutput(0xc002c79080)
	/usr/local/go/src/os/exec/exec.go:1012 +0x85
k8s.io/minikube/test/integration.debugLogs(0xc0020ee4e0, {0xc000704080, 0xe})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:562 +0x8e05
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0020ee4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:211 +0xbcc
testing.tRunner(0xc0020ee4e0, 0xc002678080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1564
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1799 [chan receive]:
testing.(*T).Run(0xc00072d520, {0x2ad04b1?, 0x0?}, 0xc000111900)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00072d520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00072d520, 0xc0030885c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1796
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1564 [chan receive, 11 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0020ee1a0, 0xc000b04030)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1487
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2257 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc002b6e310, 0x1)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x25e9920?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00219c960)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002b6e340)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002da0010, {0x3a714a0, 0xc0025c2000}, 0x1, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002da0010, 0x3b9aca00, 0x0, 0x1, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2334
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 684 [IO wait, 163 minutes]:
internal/poll.runtime_pollWait(0x1627d365db0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0x99fe76?, 0x4e68ec0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc000af5920, 0xc00136dbb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc000af5908, 0x280, {0xc0013f2000?, 0x0?, 0x0?}, 0xc000581008?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc000af5908, 0xc00136dd90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc000af5908)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc0025982a0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0025982a0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0009320f0, {0x3a87850, 0xc0025982a0})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0009320f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00280e9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 681
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 1798 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc00088db80)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00072d380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00072d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00072d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00072d380, 0xc003088580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1796
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2735 [chan receive]:
testing.(*T).Run(0xc00061d860, {0x2ad9a41?, 0x60400000004?}, 0xc000111980)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00061d860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00061d860, 0xc000111900)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1799
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1864 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000a824d0, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x25e9920?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00219c7e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a82500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00068a050, {0x3a714a0, 0xc000ade030}, 0x1, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00068a050, 0x3b9aca00, 0x0, 0x1, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1893
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 1892 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00219ca80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 1891
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 1866 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1865
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 2334 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002b6e340, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2332
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 1796 [chan receive, 3 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00072d040, 0x352aea8)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1577
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2510 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002423ce0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2585
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2590 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3a93dc0, 0xc0000542a0}, 0xc00261df50, 0xc00261df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3a93dc0, 0xc0000542a0}, 0x90?, 0xc00261df50, 0xc00261df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3a93dc0?, 0xc0000542a0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00261dfd0?, 0xb1e684?, 0xc00261dfa0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2511
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 2723 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc002dfdb20?, 0x9a7f45?, 0x4e68ec0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x77?, 0xc002dfdb80?, 0x99fe76?, 0x4e68ec0?, 0xc002dfdc08?, 0x9928db?, 0x988c66?, 0x77?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x5b8, {0xc000985a2e?, 0x5d2, 0xa442bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002c3aa08?, {0xc000985a2e?, 0x9cc25e?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002c3aa08, {0xc000985a2e, 0x5d2, 0x5d2})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a7718, {0xc000985a2e?, 0xc002dfdd98?, 0x22d?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00252fda0, {0x3a70060, 0xc0006323a8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3a701a0, 0xc00252fda0}, {0x3a70060, 0xc0006323a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3a701a0, 0xc00252fda0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4d6d6c0?, {0x3a701a0?, 0xc00252fda0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3a701a0, 0xc00252fda0}, {0x3a70120, 0xc0000a7718}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2722
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2333 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00219ccc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2332
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2743 [syscall, locked to thread]:
syscall.SyscallN(0x13a?, {0xc003003b20?, 0x9a7f45?, 0x4e68ec0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xffffffffffff59?, 0xc003003b80?, 0x99fe76?, 0x4e68ec0?, 0xc003003c08?, 0x992a45?, 0x1627be80108?, 0x6174656d5c663867?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x764, {0xc002743cab?, 0x355, 0xa442bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00260ca08?, {0xc002743cab?, 0x4f2ef47b4f2ef47b?, 0x2000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00260ca08, {0xc002743cab, 0x355, 0x355})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a7948, {0xc002743cab?, 0xc003003d98?, 0x1000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00087de60, {0x3a70060, 0xc000a107e8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3a701a0, 0xc00087de60}, {0x3a70060, 0xc000a107e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3a701a0, 0xc00087de60})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4d6d6c0?, {0x3a701a0?, 0xc00087de60?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3a701a0, 0xc00087de60}, {0x3a70120, 0xc0000a7948}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc003003fa0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2736
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2697 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc003001b20?, 0x9a7f45?, 0x4e68ec0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x35?, 0xc003001b80?, 0x99fe76?, 0x4e68ec0?, 0xc003001c08?, 0x9928db?, 0x988c66?, 0x35?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x740, {0xc00273e93a?, 0x2c6, 0xc00273e800?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002468008?, {0xc00273e93a?, 0x9cc211?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002468008, {0xc00273e93a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a10888, {0xc00273e93a?, 0xc003001d98?, 0x13a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000ac4c00, {0x3a70060, 0xc0006329c8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3a701a0, 0xc000ac4c00}, {0x3a70060, 0xc0006329c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3a701a0, 0xc000ac4c00})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4d6d6c0?, {0x3a701a0?, 0xc000ac4c00?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3a701a0, 0xc000ac4c00}, {0x3a70120, 0xc000a10888}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1566
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2511 [chan receive, 3 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a70f80, 0xc0000542a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2585
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 2724 [syscall, locked to thread]:
syscall.SyscallN(0xc0029a3b10?, {0xc0029a3b20?, 0x9a7f45?, 0x4de8940?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x10000c0004be067?, 0xc0029a3b80?, 0x99fe76?, 0x4e68ec0?, 0xc0029a3c08?, 0x992a45?, 0x1627be80108?, 0x8000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6e4, {0xc0025b588d?, 0x2773, 0xa442bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002c3b188?, {0xc0025b588d?, 0x2a0c?, 0x2a0c?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002c3b188, {0xc0025b588d, 0x2773, 0x2773})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a7730, {0xc0025b588d?, 0xc0029a3d98?, 0x3e1c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00252fdd0, {0x3a70060, 0xc000920210})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3a701a0, 0xc00252fdd0}, {0x3a70060, 0xc000920210}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3a701a0, 0xc00252fdd0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4d6d6c0?, {0x3a701a0?, 0xc00252fdd0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3a701a0, 0xc00252fdd0}, {0x3a70120, 0xc0000a7730}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002678000?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2722
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2722 [syscall, locked to thread]:
syscall.SyscallN(0x7ff9c0844de0?, {0xc000429ae0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x328, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0023e85d0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002a5c840)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc002a5c840)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc00061d520, 0xc002a5c840)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFirstStart({0x3a93c00?, 0xc000496a80?}, 0xc00061d520, {0xc0013fc378?, 0x65dffcd1?}, {0xc002e7caf0?, 0xc000429f60?}, {0xad75b3?, 0xa28eaf?}, {0xc0022a4a00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:186 +0xd5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00061d520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00061d520, 0xc000111380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2716
	/usr/local/go/src/testing/testing.go:1742 +0x390


Test pass (122/207)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 15.47
4 TestDownloadOnly/v1.16.0/preload-exists 0.06
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.25
9 TestDownloadOnly/v1.16.0/DeleteAll 1.08
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 1.06
12 TestDownloadOnly/v1.28.4/json-events 11.67
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.23
18 TestDownloadOnly/v1.28.4/DeleteAll 1.15
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 1.08
21 TestDownloadOnly/v1.29.0-rc.2/json-events 10.83
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.23
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 1.09
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 1.08
30 TestBinaryMirror 6.63
31 TestOffline 399.18
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.23
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.24
36 TestAddons/Setup 372.53
39 TestAddons/parallel/Ingress 65.93
40 TestAddons/parallel/InspektorGadget 25.78
41 TestAddons/parallel/MetricsServer 21.56
42 TestAddons/parallel/HelmTiller 39.99
44 TestAddons/parallel/CSI 91.56
45 TestAddons/parallel/Headlamp 33.5
46 TestAddons/parallel/CloudSpanner 21.28
47 TestAddons/parallel/LocalPath 28.8
48 TestAddons/parallel/NvidiaDevicePlugin 20.32
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.29
53 TestAddons/StoppedEnableDisable 47.58
54 TestCertOptions 338.65
55 TestCertExpiration 1013.08
56 TestDockerFlags 247.11
57 TestForceSystemdFlag 236.49
58 TestForceSystemdEnv 515.33
65 TestErrorSpam/start 16.35
66 TestErrorSpam/status 34.75
67 TestErrorSpam/pause 21.59
68 TestErrorSpam/unpause 21.62
69 TestErrorSpam/stop 46.2
72 TestFunctional/serial/CopySyncFile 0.03
74 TestFunctional/serial/AuditLog 0
80 TestFunctional/serial/CacheCmd/cache/add_remote 327.34
81 TestFunctional/serial/CacheCmd/cache/add_local 60.72
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.25
83 TestFunctional/serial/CacheCmd/cache/list 0.22
86 TestFunctional/serial/CacheCmd/cache/delete 0.48
91 TestFunctional/serial/LogsCmd 98.06
92 TestFunctional/serial/LogsFileCmd 120.5
104 TestFunctional/parallel/AddonsCmd 0.76
107 TestFunctional/parallel/SSHCmd 21.97
108 TestFunctional/parallel/CpCmd 57.42
110 TestFunctional/parallel/FileSync 9.78
117 TestFunctional/parallel/NonActiveRuntimeDisabled 10.1
119 TestFunctional/parallel/License 2.48
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
136 TestFunctional/parallel/ProfileCmd/profile_not_create 8.29
137 TestFunctional/parallel/ProfileCmd/profile_list 8.13
138 TestFunctional/parallel/ProfileCmd/profile_json_output 8.29
139 TestFunctional/parallel/Version/short 0.21
140 TestFunctional/parallel/Version/components 7.44
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.38
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.44
150 TestFunctional/parallel/ImageCommands/Setup 3.8
155 TestFunctional/parallel/ImageCommands/ImageRemove 120.66
157 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 60.12
158 TestFunctional/delete_addon-resizer_images 0.41
159 TestFunctional/delete_my-image_image 0.17
160 TestFunctional/delete_minikube_cached_images 0.17
164 TestImageBuild/serial/Setup 181.32
165 TestImageBuild/serial/NormalBuild 8.94
166 TestImageBuild/serial/BuildWithBuildArg 7.87
167 TestImageBuild/serial/BuildWithDockerIgnore 7.13
168 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.02
177 TestJSONOutput/start/Command 194.11
178 TestJSONOutput/start/Audit 0
180 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/pause/Command 7.41
184 TestJSONOutput/pause/Audit 0
186 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/unpause/Command 7.38
190 TestJSONOutput/unpause/Audit 0
192 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/stop/Command 29.19
196 TestJSONOutput/stop/Audit 0
198 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
200 TestErrorJSONOutput 1.34
205 TestMainNoArgs 0.22
206 TestMinikubeProfile 469.84
209 TestMountStart/serial/StartWithMountFirst 141.1
210 TestMountStart/serial/VerifyMountFirst 9.09
211 TestMountStart/serial/StartWithMountSecond 140.66
212 TestMountStart/serial/VerifyMountSecond 8.96
213 TestMountStart/serial/DeleteFirst 22.15
214 TestMountStart/serial/VerifyMountPostDelete 8.96
215 TestMountStart/serial/Stop 22.2
216 TestMountStart/serial/RestartStopped 106.24
217 TestMountStart/serial/VerifyMountPostStop 9.02
220 TestMultiNode/serial/FreshStart2Nodes 393.87
221 TestMultiNode/serial/DeployApp2Nodes 8.44
224 TestMultiNode/serial/MultiNodeLabels 0.15
225 TestMultiNode/serial/ProfileList 7.01
227 TestMultiNode/serial/StopNode 73.41
228 TestMultiNode/serial/StartAfterStop 167.9
230 TestMultiNode/serial/DeleteNode 59.7
231 TestMultiNode/serial/StopMultiNode 71.1
237 TestScheduledStopWindows 311.72
247 TestNoKubernetes/serial/StartNoK8sWithVersion 0.27
260 TestStoppedBinaryUpgrade/Setup 0.63
261 TestStoppedBinaryUpgrade/Upgrade 736.19
262 TestStoppedBinaryUpgrade/MinikubeLogs 9.22
271 TestPause/serial/Start 450.13
TestDownloadOnly/v1.16.0/json-events (15.47s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-695400 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-695400 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv: (15.4702826s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (15.47s)

TestDownloadOnly/v1.16.0/preload-exists (0.06s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.06s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.25s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-695400
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-695400: exit status 85 (248.0745ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-695400 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:42 UTC |          |
	|         | -p download-only-695400        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 00:42:20
	Running on machine: minikube5
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 00:42:20.748663    6596 out.go:291] Setting OutFile to fd 640 ...
	I0229 00:42:20.749219    6596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 00:42:20.749219    6596 out.go:304] Setting ErrFile to fd 644...
	I0229 00:42:20.749219    6596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 00:42:20.760587    6596 root.go:314] Error reading config file at C:\Users\jenkins.minikube5\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube5\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0229 00:42:20.770030    6596 out.go:298] Setting JSON to true
	I0229 00:42:20.774013    6596 start.go:129] hostinfo: {"hostname":"minikube5","uptime":263567,"bootTime":1708903772,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 00:42:20.774309    6596 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 00:42:20.776144    6596 out.go:97] [download-only-695400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 00:42:20.776393    6596 notify.go:220] Checking for updates...
	W0229 00:42:20.776393    6596 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0229 00:42:20.776913    6596 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 00:42:20.777400    6596 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 00:42:20.777824    6596 out.go:169] MINIKUBE_LOCATION=18063
	I0229 00:42:20.778325    6596 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0229 00:42:20.779518    6596 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 00:42:20.780493    6596 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 00:42:26.091734    6596 out.go:97] Using the hyperv driver based on user configuration
	I0229 00:42:26.091867    6596 start.go:299] selected driver: hyperv
	I0229 00:42:26.091927    6596 start.go:903] validating driver "hyperv" against <nil>
	I0229 00:42:26.092406    6596 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 00:42:26.148262    6596 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0229 00:42:26.149262    6596 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 00:42:26.149262    6596 cni.go:84] Creating CNI manager for ""
	I0229 00:42:26.149262    6596 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 00:42:26.149262    6596 start_flags.go:323] config:
	{Name:download-only-695400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-695400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 00:42:26.151276    6596 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 00:42:26.152264    6596 out.go:97] Downloading VM boot image ...
	I0229 00:42:26.152264    6596 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 00:42:30.765599    6596 out.go:97] Starting control plane node download-only-695400 in cluster download-only-695400
	I0229 00:42:30.766598    6596 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 00:42:30.813304    6596 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 00:42:30.813304    6596 cache.go:56] Caching tarball of preloaded images
	I0229 00:42:30.814558    6596 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 00:42:30.815587    6596 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0229 00:42:30.815587    6596 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0229 00:42:30.882004    6596 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 00:42:34.202258    6596 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0229 00:42:34.202592    6596 preload.go:256] verifying checksum of C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0229 00:42:35.163988    6596 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0229 00:42:35.164301    6596 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-695400\config.json ...
	I0229 00:42:35.164987    6596 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-695400\config.json: {Name:mk697d17c7a7d4b8a7b1be348af4f32e5f44344b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:42:35.166040    6596 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 00:42:35.167360    6596 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\windows\amd64\v1.16.0/kubectl.exe
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-695400"

-- /stdout --
** stderr ** 
	W0229 00:42:36.267756   13248 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.25s)

TestDownloadOnly/v1.16.0/DeleteAll (1.08s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.0823722s)
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (1.08s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (1.06s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-695400
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-695400: (1.0601415s)
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (1.06s)

TestDownloadOnly/v1.28.4/json-events (11.67s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-923600 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-923600 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv: (11.6715323s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (11.67s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.23s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-923600
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-923600: exit status 85 (224.4342ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-695400 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:42 UTC |                     |
	|         | -p download-only-695400        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:42 UTC | 29 Feb 24 00:42 UTC |
	| delete  | -p download-only-695400        | download-only-695400 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:42 UTC | 29 Feb 24 00:42 UTC |
	| start   | -o=json --download-only        | download-only-923600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:42 UTC |                     |
	|         | -p download-only-923600        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 00:42:38
	Running on machine: minikube5
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 00:42:38.680466    7624 out.go:291] Setting OutFile to fd 764 ...
	I0229 00:42:38.680466    7624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 00:42:38.680466    7624 out.go:304] Setting ErrFile to fd 688...
	I0229 00:42:38.680466    7624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 00:42:38.704460    7624 out.go:298] Setting JSON to true
	I0229 00:42:38.707458    7624 start.go:129] hostinfo: {"hostname":"minikube5","uptime":263585,"bootTime":1708903772,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 00:42:38.707458    7624 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 00:42:38.708458    7624 out.go:97] [download-only-923600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 00:42:38.708458    7624 notify.go:220] Checking for updates...
	I0229 00:42:38.709472    7624 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 00:42:38.710471    7624 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 00:42:38.710471    7624 out.go:169] MINIKUBE_LOCATION=18063
	I0229 00:42:38.711460    7624 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0229 00:42:38.712459    7624 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 00:42:38.713460    7624 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 00:42:43.928020    7624 out.go:97] Using the hyperv driver based on user configuration
	I0229 00:42:43.928020    7624 start.go:299] selected driver: hyperv
	I0229 00:42:43.928020    7624 start.go:903] validating driver "hyperv" against <nil>
	I0229 00:42:43.928686    7624 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 00:42:43.977921    7624 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0229 00:42:43.979115    7624 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 00:42:43.979210    7624 cni.go:84] Creating CNI manager for ""
	I0229 00:42:43.979210    7624 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 00:42:43.979361    7624 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 00:42:43.979441    7624 start_flags.go:323] config:
	{Name:download-only-923600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-923600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 00:42:43.979707    7624 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 00:42:43.980894    7624 out.go:97] Starting control plane node download-only-923600 in cluster download-only-923600
	I0229 00:42:43.980894    7624 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 00:42:44.022041    7624 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 00:42:44.022268    7624 cache.go:56] Caching tarball of preloaded images
	I0229 00:42:44.022424    7624 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 00:42:44.023565    7624 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0229 00:42:44.023641    7624 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0229 00:42:44.095121    7624 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 00:42:47.979768    7624 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0229 00:42:47.980417    7624 preload.go:256] verifying checksum of C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-923600"

-- /stdout --
** stderr ** 
	W0229 00:42:50.309309    3652 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.23s)

TestDownloadOnly/v1.28.4/DeleteAll (1.15s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1479609s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (1.15s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.08s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-923600
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-923600: (1.076075s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.08s)

TestDownloadOnly/v1.29.0-rc.2/json-events (10.83s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-189600 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-189600 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv: (10.8272755s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (10.83s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.23s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-189600
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-189600: exit status 85 (224.6648ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-695400 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:42 UTC |                     |
	|         | -p download-only-695400           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:42 UTC | 29 Feb 24 00:42 UTC |
	| delete  | -p download-only-695400           | download-only-695400 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:42 UTC | 29 Feb 24 00:42 UTC |
	| start   | -o=json --download-only           | download-only-923600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:42 UTC |                     |
	|         | -p download-only-923600           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:42 UTC | 29 Feb 24 00:42 UTC |
	| delete  | -p download-only-923600           | download-only-923600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:42 UTC | 29 Feb 24 00:42 UTC |
	| start   | -o=json --download-only           | download-only-189600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 00:42 UTC |                     |
	|         | -p download-only-189600           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 00:42:52
	Running on machine: minikube5
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 00:42:52.824884    3688 out.go:291] Setting OutFile to fd 636 ...
	I0229 00:42:52.825471    3688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 00:42:52.825471    3688 out.go:304] Setting ErrFile to fd 720...
	I0229 00:42:52.825471    3688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 00:42:52.845786    3688 out.go:298] Setting JSON to true
	I0229 00:42:52.849788    3688 start.go:129] hostinfo: {"hostname":"minikube5","uptime":263599,"bootTime":1708903772,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 00:42:52.849964    3688 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 00:42:52.851012    3688 out.go:97] [download-only-189600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 00:42:52.851682    3688 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 00:42:52.851682    3688 notify.go:220] Checking for updates...
	I0229 00:42:52.852922    3688 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 00:42:52.853035    3688 out.go:169] MINIKUBE_LOCATION=18063
	I0229 00:42:52.853746    3688 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0229 00:42:52.854637    3688 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 00:42:52.855865    3688 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 00:42:58.071295    3688 out.go:97] Using the hyperv driver based on user configuration
	I0229 00:42:58.071413    3688 start.go:299] selected driver: hyperv
	I0229 00:42:58.071413    3688 start.go:903] validating driver "hyperv" against <nil>
	I0229 00:42:58.071758    3688 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 00:42:58.124586    3688 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0229 00:42:58.125983    3688 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 00:42:58.126116    3688 cni.go:84] Creating CNI manager for ""
	I0229 00:42:58.126168    3688 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 00:42:58.126168    3688 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 00:42:58.126268    3688 start_flags.go:323] config:
	{Name:download-only-189600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-189600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 00:42:58.127070    3688 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 00:42:58.128814    3688 out.go:97] Starting control plane node download-only-189600 in cluster download-only-189600
	I0229 00:42:58.128923    3688 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 00:42:58.166558    3688 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0229 00:42:58.166672    3688 cache.go:56] Caching tarball of preloaded images
	I0229 00:42:58.167064    3688 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 00:42:58.168038    3688 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0229 00:42:58.168152    3688 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0229 00:42:58.236881    3688 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0229 00:43:01.455383    3688 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0229 00:43:01.456398    3688 preload.go:256] verifying checksum of C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0229 00:43:02.396445    3688 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0229 00:43:02.397750    3688 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-189600\config.json ...
	I0229 00:43:02.398165    3688 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-189600\config.json: {Name:mk1eae2d63424e2d660f70324fd6fa6f2ae025d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:43:02.398464    3688 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 00:43:02.399628    3688 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\windows\amd64\v1.29.0-rc.2/kubectl.exe
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-189600"

-- /stdout --
** stderr ** 
	W0229 00:43:03.589529    9180 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.23s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.09s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.0860594s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.09s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-189600
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-189600: (1.0840288s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.08s)

TestBinaryMirror (6.63s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-085100 --alsologtostderr --binary-mirror http://127.0.0.1:63880 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-085100 --alsologtostderr --binary-mirror http://127.0.0.1:63880 --driver=hyperv: (5.8496879s)
helpers_test.go:175: Cleaning up "binary-mirror-085100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-085100
--- PASS: TestBinaryMirror (6.63s)

TestOffline (399.18s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-419900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-419900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (5m55.9395588s)
helpers_test.go:175: Cleaning up "offline-docker-419900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-419900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-419900: (43.2385598s)
--- PASS: TestOffline (399.18s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.23s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-611800
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-611800: exit status 85 (227.6028ms)

-- stdout --
	* Profile "addons-611800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-611800"

-- /stdout --
** stderr ** 
	W0229 00:43:15.784166    5512 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.23s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.24s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-611800
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-611800: exit status 85 (239.3567ms)

-- stdout --
	* Profile "addons-611800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-611800"

-- /stdout --
** stderr ** 
	W0229 00:43:15.784166    5180 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.24s)

TestAddons/Setup (372.53s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-611800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-611800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m12.5259387s)
--- PASS: TestAddons/Setup (372.53s)

TestAddons/parallel/Ingress (65.93s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-611800 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-611800 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-611800 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b2c0d865-ce01-4365-a06c-75018c5b906c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b2c0d865-ce01-4365-a06c-75018c5b906c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0132438s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-611800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-611800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.8138833s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-611800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0229 00:50:33.076749    9568 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-611800 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-611800 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-611800 ip: (2.7414368s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.19.6.238
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-611800 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-611800 addons disable ingress-dns --alsologtostderr -v=1: (16.5339923s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-611800 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-611800 addons disable ingress --alsologtostderr -v=1: (21.8043178s)
--- PASS: TestAddons/parallel/Ingress (65.93s)

TestAddons/parallel/InspektorGadget (25.78s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7dbzk" [e1ac2e19-287b-4564-ae47-7558cfdaa585] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0262056s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-611800
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-611800: (20.7449991s)
--- PASS: TestAddons/parallel/InspektorGadget (25.78s)

TestAddons/parallel/MetricsServer (21.56s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 12.8081ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-tb5qv" [7f3da268-35bd-453c-89c0-fb98c75d9b42] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0163935s
addons_test.go:415: (dbg) Run:  kubectl --context addons-611800 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-611800 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-611800 addons disable metrics-server --alsologtostderr -v=1: (15.3464144s)
--- PASS: TestAddons/parallel/MetricsServer (21.56s)

TestAddons/parallel/HelmTiller (39.99s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 7.08ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-slwdh" [882a5072-a97f-4b3e-b766-dd55157831dd] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.0144645s
addons_test.go:473: (dbg) Run:  kubectl --context addons-611800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-611800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (18.5317786s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-611800 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-611800 addons disable helm-tiller --alsologtostderr -v=1: (15.4247937s)
--- PASS: TestAddons/parallel/HelmTiller (39.99s)

TestAddons/parallel/CSI (91.56s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 27.6047ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-611800 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-611800 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [972e3246-0a36-4f99-887e-d8cb5560e5b9] Pending
helpers_test.go:344: "task-pv-pod" [972e3246-0a36-4f99-887e-d8cb5560e5b9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [972e3246-0a36-4f99-887e-d8cb5560e5b9] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 21.0170855s
addons_test.go:584: (dbg) Run:  kubectl --context addons-611800 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-611800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-611800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-611800 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-611800 delete pod task-pv-pod: (1.1828086s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-611800 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-611800 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-611800 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [70f4dd2d-bf0a-4111-93a6-95722e46dd5d] Pending
helpers_test.go:344: "task-pv-pod-restore" [70f4dd2d-bf0a-4111-93a6-95722e46dd5d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [70f4dd2d-bf0a-4111-93a6-95722e46dd5d] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0233756s
addons_test.go:626: (dbg) Run:  kubectl --context addons-611800 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-611800 delete pod task-pv-pod-restore: (1.8722179s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-611800 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-611800 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-611800 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-611800 addons disable csi-hostpath-driver --alsologtostderr -v=1: (21.986096s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-611800 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-611800 addons disable volumesnapshots --alsologtostderr -v=1: (16.3991885s)
--- PASS: TestAddons/parallel/CSI (91.56s)

TestAddons/parallel/Headlamp (33.50s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-611800 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-611800 --alsologtostderr -v=1: (16.4778758s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-999np" [fc8e14f4-407e-41e2-8625-822b2756a23b] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-999np" [fc8e14f4-407e-41e2-8625-822b2756a23b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-999np" [fc8e14f4-407e-41e2-8625-822b2756a23b] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.0169314s
--- PASS: TestAddons/parallel/Headlamp (33.50s)

TestAddons/parallel/CloudSpanner (21.28s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-x7c4s" [bdf6946f-67f2-4d22-8c83-91064133909e] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0194486s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-611800
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-611800: (16.2435596s)
--- PASS: TestAddons/parallel/CloudSpanner (21.28s)

TestAddons/parallel/LocalPath (28.80s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-611800 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-611800 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-611800 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c5653ae4-d31f-4fd1-a3ae-5032ca1dc1f2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c5653ae4-d31f-4fd1-a3ae-5032ca1dc1f2] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c5653ae4-d31f-4fd1-a3ae-5032ca1dc1f2] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0213252s
addons_test.go:891: (dbg) Run:  kubectl --context addons-611800 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-611800 ssh "cat /opt/local-path-provisioner/pvc-2ac34051-4600-43ad-afd5-2be80059d3d9_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-611800 ssh "cat /opt/local-path-provisioner/pvc-2ac34051-4600-43ad-afd5-2be80059d3d9_default_test-pvc/file1": (9.1647943s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-611800 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-611800 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-611800 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-611800 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (7.1570982s)
--- PASS: TestAddons/parallel/LocalPath (28.80s)

TestAddons/parallel/NvidiaDevicePlugin (20.32s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-l9vxg" [047843a7-3028-44d7-93cb-eba90afa89ca] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0152927s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-611800
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-611800: (14.3008906s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (20.32s)

TestAddons/parallel/Yakd (6.01s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-g8dx5" [169f5ace-0eea-4a40-9034-17dd7344fa3b] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0073471s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.29s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-611800 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-611800 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.29s)

TestAddons/StoppedEnableDisable (47.58s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-611800
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-611800: (36.4978829s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-611800
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-611800: (4.387655s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-611800
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-611800: (4.2991267s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-611800
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-611800: (2.3982172s)
--- PASS: TestAddons/StoppedEnableDisable (47.58s)

TestCertOptions (338.65s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-620400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-620400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (4m40.3808518s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-620400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-620400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (10.7093676s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-620400 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-620400 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-620400 -- "sudo cat /etc/kubernetes/admin.conf": (9.2994719s)
helpers_test.go:175: Cleaning up "cert-options-620400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-620400
E0229 03:19:28.979815    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-620400: (38.1379596s)
--- PASS: TestCertOptions (338.65s)

TestCertExpiration (1013.08s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-224600 --memory=2048 --cert-expiration=3m --driver=hyperv
E0229 03:04:28.934363    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-224600 --memory=2048 --cert-expiration=3m --driver=hyperv: (7m37.8583356s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-224600 --memory=2048 --cert-expiration=8760h --driver=hyperv
E0229 03:14:28.967759    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-224600 --memory=2048 --cert-expiration=8760h --driver=hyperv: (5m30.6477082s)
helpers_test.go:175: Cleaning up "cert-expiration-224600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-224600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-224600: (44.5707188s)
--- PASS: TestCertExpiration (1013.08s)

TestDockerFlags (247.11s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-601400 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-601400 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (3m4.682558s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-601400 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-601400 ssh "sudo systemctl show docker --property=Environment --no-pager": (9.4712173s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-601400 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-601400 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (9.6192117s)
helpers_test.go:175: Cleaning up "docker-flags-601400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-601400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-601400: (43.3374969s)
--- PASS: TestDockerFlags (247.11s)

TestForceSystemdFlag (236.49s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-419900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-419900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (3m4.5747431s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-419900 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-419900 ssh "docker info --format {{.CgroupDriver}}": (9.3738979s)
helpers_test.go:175: Cleaning up "force-systemd-flag-419900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-419900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-419900: (42.5362986s)
--- PASS: TestForceSystemdFlag (236.49s)

TestForceSystemdEnv (515.33s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-812500 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-812500 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (7m49.2985508s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-812500 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-812500 ssh "docker info --format {{.CgroupDriver}}": (9.4381066s)
helpers_test.go:175: Cleaning up "force-systemd-env-812500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-812500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-812500: (36.5913155s)
--- PASS: TestForceSystemdEnv (515.33s)

TestErrorSpam/start (16.35s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 start --dry-run: (5.4495568s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 start --dry-run: (5.4296491s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 start --dry-run: (5.4612807s)
--- PASS: TestErrorSpam/start (16.35s)

TestErrorSpam/status (34.75s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 status: (11.9534735s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 status
E0229 00:57:12.509036    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 status: (11.3696367s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 status: (11.4276274s)
--- PASS: TestErrorSpam/status (34.75s)

TestErrorSpam/pause (21.59s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 pause: (7.3331618s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 pause: (7.164088s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 pause: (7.089133s)
--- PASS: TestErrorSpam/pause (21.59s)

TestErrorSpam/unpause (21.62s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 unpause: (7.28491s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 unpause: (7.2288773s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 unpause: (7.1069548s)
--- PASS: TestErrorSpam/unpause (21.62s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 stop: (29.2382835s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 stop: (8.5967848s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-384500 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-384500 stop: (8.3601515s)
--- PASS: TestErrorSpam/stop (46.20s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\3312\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 cache add registry.k8s.io/pause:3.1: (1m26.3408811s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 cache add registry.k8s.io/pause:3.3
E0229 01:09:28.551970    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 cache add registry.k8s.io/pause:3.3: (2m0.4968902s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 cache add registry.k8s.io/pause:latest
E0229 01:10:51.776793    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 cache add registry.k8s.io/pause:latest: (2m0.5004208s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (327.34s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-583600 C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2567175620\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-583600 C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2567175620\001: (1.2673833s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 cache add minikube-local-cache-test:functional-583600
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 cache add minikube-local-cache-test:functional-583600: (59.0421865s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 cache delete minikube-local-cache-test:functional-583600
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-583600
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (60.72s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.48s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 logs
E0229 01:19:28.576619    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 logs: (1m38.0597288s)
--- PASS: TestFunctional/serial/LogsCmd (98.06s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 logs --file C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2943917967\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 logs --file C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2943917967\001\logs.txt: (2m0.4962237s)
--- PASS: TestFunctional/serial/LogsFileCmd (120.50s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.76s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 ssh "echo hello": (11.6292349s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 ssh "cat /etc/hostname": (10.3437588s)
--- PASS: TestFunctional/parallel/SSHCmd (21.97s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 cp testdata\cp-test.txt /home/docker/cp-test.txt: (9.3731809s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 ssh -n functional-583600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 ssh -n functional-583600 "sudo cat /home/docker/cp-test.txt": (10.2907702s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 cp functional-583600:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd2518891944\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 cp functional-583600:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd2518891944\001\cp-test.txt: (10.1171413s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 ssh -n functional-583600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 ssh -n functional-583600 "sudo cat /home/docker/cp-test.txt": (9.861775s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (7.6857395s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 ssh -n functional-583600 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 ssh -n functional-583600 "sudo cat /tmp/does/not/exist/cp-test.txt": (10.0808065s)
--- PASS: TestFunctional/parallel/CpCmd (57.42s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/3312/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 ssh "sudo cat /etc/test/nested/copy/3312/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 ssh "sudo cat /etc/test/nested/copy/3312/hosts": (9.7789723s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (9.78s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-583600 ssh "sudo systemctl is-active crio": exit status 1 (10.1046197s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	W0229 01:23:27.661162    4212 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (10.10s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (2.4581605s)
--- PASS: TestFunctional/parallel/License (2.48s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-583600 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-583600 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2380: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (7.8835357s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (8.29s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (7.8856563s)
functional_test.go:1311: Took "7.8859367s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "243.5224ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (8.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (8.0391912s)
functional_test.go:1362: Took "8.0393532s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "253.718ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (8.29s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 version --short
--- PASS: TestFunctional/parallel/Version/short (0.21s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 version -o=json --components: (7.435323s)
--- PASS: TestFunctional/parallel/Version/components (7.44s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 update-context --alsologtostderr -v=2: (2.377339s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.38s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 update-context --alsologtostderr -v=2: (2.4426552s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.44s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.5853543s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-583600
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.80s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 image rm gcr.io/google-containers/addon-resizer:functional-583600 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 image rm gcr.io/google-containers/addon-resizer:functional-583600 --alsologtostderr: (1m0.3790259s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 image ls: (1m0.283016s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (120.66s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-583600
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-583600 image save --daemon gcr.io/google-containers/addon-resizer:functional-583600 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-583600 image save --daemon gcr.io/google-containers/addon-resizer:functional-583600 --alsologtostderr: (59.7653811s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-583600
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (60.12s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-583600
--- PASS: TestFunctional/delete_addon-resizer_images (0.41s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-583600
--- PASS: TestFunctional/delete_my-image_image (0.17s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-583600
--- PASS: TestFunctional/delete_minikube_cached_images (0.17s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-326100 --driver=hyperv
E0229 01:39:28.647058    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-326100 --driver=hyperv: (3m1.324796s)
--- PASS: TestImageBuild/serial/Setup (181.32s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-326100
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-326100: (8.9400343s)
--- PASS: TestImageBuild/serial/NormalBuild (8.94s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-326100
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-326100: (7.8692217s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (7.87s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-326100
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-326100: (7.1287891s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.13s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-326100
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-326100: (7.0207291s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.02s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-703200 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0229 01:54:28.697226    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-703200 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m14.1047237s)
--- PASS: TestJSONOutput/start/Command (194.11s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (7.41s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-703200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-703200 --output=json --user=testUser: (7.4085531s)
--- PASS: TestJSONOutput/pause/Command (7.41s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.38s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-703200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-703200 --output=json --user=testUser: (7.3815501s)
--- PASS: TestJSONOutput/unpause/Command (7.38s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (29.19s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-703200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-703200 --output=json --user=testUser: (29.1935567s)
--- PASS: TestJSONOutput/stop/Command (29.19s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.34s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-660700 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-660700 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (257.1887ms)
-- stdout --
	{"specversion":"1.0","id":"7f909e42-e1be-41de-8789-dd638f318bd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-660700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"db0efce3-3b97-4d5e-8ca7-3ade64686ba7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube5\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"7f1948a4-22bc-490b-aefb-11a63673244b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a10e0cfd-f04a-4ee0-9d22-b7c686f3a3de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"2b4eeda3-a8fc-4857-8775-a6ba8ef9f64b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18063"}}
	{"specversion":"1.0","id":"e8d69e52-bdda-4d56-aae3-2da1253753cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a6a4d129-c302-4731-b546-76f3589773df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
** stderr ** 
	W0229 01:56:57.057384    9116 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-660700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-660700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-660700: (1.0856191s)
--- PASS: TestErrorJSONOutput (1.34s)
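Each line in the `--output=json` stream captured above is a CloudEvents-style JSON object, and the suffix of its `type` field (`step`, `info`, `error`) distinguishes progress steps from informational messages and errors. A minimal consumer-side sketch of grouping such a stream by event kind — the sample line is abridged from the error event in the log above; anything beyond the fields shown there is an assumption:

```python
import json

def group_events(lines):
    """Group minikube --output=json lines by their CloudEvents type suffix."""
    groups = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        # "io.k8s.sigs.minikube.error" -> "error"
        kind = event["type"].rsplit(".", 1)[-1]
        groups.setdefault(kind, []).append(event["data"])
    return groups

# Abridged from the error event emitted by `start --driver=fail` above.
sample = [
    '{"specversion":"1.0","id":"a6a4d129-c302-4731-b546-76f3589773df",'
    '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",'
    '"datacontenttype":"application/json",'
    '"data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS",'
    '"message":"The driver \'fail\' is not supported on windows/amd64"}}'
]

groups = group_events(sample)
print(groups["error"][0]["exitcode"])  # → 56
```

This is essentially what `TestErrorJSONOutput` relies on: a failed start must still produce a well-formed `error` event carrying the exit code.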

TestMainNoArgs (0.22s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.22s)

TestMinikubeProfile (469.84s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-394800 --driver=hyperv
E0229 01:59:28.714633    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-394800 --driver=hyperv: (3m0.563805s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-394800 --driver=hyperv
E0229 02:00:51.967638    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-394800 --driver=hyperv: (3m0.6431696s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-394800
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (13.7665825s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-394800
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (13.6809529s)
helpers_test.go:175: Cleaning up "second-394800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-394800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-394800: (37.6667908s)
helpers_test.go:175: Cleaning up "first-394800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-394800
E0229 02:04:28.724297    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-394800: (42.7141887s)
--- PASS: TestMinikubeProfile (469.84s)
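The `profile list -ojson` calls above return machine-readable profile data. A sketch of extracting profile names from such output — the captured JSON here is a hypothetical abridgment, and the top-level `valid`/`invalid` arrays with per-profile `Name` fields are an assumption about the schema, not taken from this log:

```python
import json

# Hypothetical, abridged `minikube profile list -o json` output; the
# "valid"/"invalid" top-level arrays are an assumed schema.
raw = '{"invalid":[],"valid":[{"Name":"first-394800"},{"Name":"second-394800"}]}'

def profile_names(raw_json):
    """Return the names of all valid profiles from `profile list -o json` output."""
    return [p["Name"] for p in json.loads(raw_json).get("valid", [])]

print(profile_names(raw))  # → ['first-394800', 'second-394800']
```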

TestMountStart/serial/StartWithMountFirst (141.1s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-141600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-141600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m20.095829s)
--- PASS: TestMountStart/serial/StartWithMountFirst (141.10s)

TestMountStart/serial/VerifyMountFirst (9.09s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-141600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-141600 ssh -- ls /minikube-host: (9.0901838s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.09s)

TestMountStart/serial/StartWithMountSecond (140.66s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-141600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0229 02:09:28.750728    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-141600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m19.648855s)
--- PASS: TestMountStart/serial/StartWithMountSecond (140.66s)

TestMountStart/serial/VerifyMountSecond (8.96s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-141600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-141600 ssh -- ls /minikube-host: (8.9594333s)
--- PASS: TestMountStart/serial/VerifyMountSecond (8.96s)

TestMountStart/serial/DeleteFirst (22.15s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-141600 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-141600 --alsologtostderr -v=5: (22.1452497s)
--- PASS: TestMountStart/serial/DeleteFirst (22.15s)

TestMountStart/serial/VerifyMountPostDelete (8.96s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-141600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-141600 ssh -- ls /minikube-host: (8.955079s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (8.96s)

TestMountStart/serial/Stop (22.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-141600
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-141600: (22.2036861s)
--- PASS: TestMountStart/serial/Stop (22.20s)

TestMountStart/serial/RestartStopped (106.24s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-141600
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-141600: (1m45.2295607s)
--- PASS: TestMountStart/serial/RestartStopped (106.24s)

TestMountStart/serial/VerifyMountPostStop (9.02s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-141600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-141600 ssh -- ls /minikube-host: (9.0141604s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.02s)

TestMultiNode/serial/FreshStart2Nodes (393.87s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-314500 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0229 02:14:28.768025    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
E0229 02:17:32.037196    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
multinode_test.go:86: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-314500 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m11.4843314s)
multinode_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-314500 status --alsologtostderr
E0229 02:19:28.778738    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
multinode_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-314500 status --alsologtostderr: (22.3875989s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (393.87s)

TestMultiNode/serial/DeployApp2Nodes (8.44s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-314500 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-314500 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-314500 -- rollout status deployment/busybox: (2.8346115s)
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-314500 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-314500 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-314500 -- exec busybox-5b5d89c9d6-826w2 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-314500 -- exec busybox-5b5d89c9d6-826w2 -- nslookup kubernetes.io: (1.919014s)
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-314500 -- exec busybox-5b5d89c9d6-qcblm -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-314500 -- exec busybox-5b5d89c9d6-826w2 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-314500 -- exec busybox-5b5d89c9d6-qcblm -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-314500 -- exec busybox-5b5d89c9d6-826w2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-314500 -- exec busybox-5b5d89c9d6-qcblm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.44s)

TestMultiNode/serial/MultiNodeLabels (0.15s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-314500 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.15s)

TestMultiNode/serial/ProfileList (7.01s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (7.0084508s)
--- PASS: TestMultiNode/serial/ProfileList (7.01s)

TestMultiNode/serial/StopNode (73.41s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-314500 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-314500 node stop m03: (24.8102938s)
multinode_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-314500 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-314500 status: exit status 7 (24.2458515s)
-- stdout --
	multinode-314500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-314500-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-314500-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	W0229 02:26:06.605983    3684 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
multinode_test.go:251: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-314500 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-314500 status --alsologtostderr: exit status 7 (24.356607s)
-- stdout --
	multinode-314500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-314500-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-314500-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	W0229 02:26:30.857258   13956 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 02:26:30.918626   13956 out.go:291] Setting OutFile to fd 1528 ...
	I0229 02:26:30.919390   13956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:26:30.919390   13956 out.go:304] Setting ErrFile to fd 1532...
	I0229 02:26:30.919465   13956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:26:30.938308   13956 out.go:298] Setting JSON to false
	I0229 02:26:30.938308   13956 mustload.go:65] Loading cluster: multinode-314500
	I0229 02:26:30.938308   13956 notify.go:220] Checking for updates...
	I0229 02:26:30.939123   13956 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:26:30.939123   13956 status.go:255] checking status of multinode-314500 ...
	I0229 02:26:30.939726   13956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:26:32.941553   13956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:26:32.941649   13956 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:26:32.941649   13956 status.go:330] multinode-314500 host status = "Running" (err=<nil>)
	I0229 02:26:32.941727   13956 host.go:66] Checking if "multinode-314500" exists ...
	I0229 02:26:32.942376   13956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:26:34.964157   13956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:26:34.964157   13956 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:26:34.964157   13956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:26:37.369772   13956 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:26:37.369772   13956 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:26:37.369772   13956 host.go:66] Checking if "multinode-314500" exists ...
	I0229 02:26:37.381488   13956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 02:26:37.382485   13956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:26:39.387965   13956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:26:39.388137   13956 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:26:39.388297   13956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500 ).networkadapters[0]).ipaddresses[0]
	I0229 02:26:41.796078   13956 main.go:141] libmachine: [stdout =====>] : 172.19.2.165
	
	I0229 02:26:41.796078   13956 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:26:41.796078   13956 sshutil.go:53] new ssh client: &{IP:172.19.2.165 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500\id_rsa Username:docker}
	I0229 02:26:41.897958   13956 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.5162173s)
	I0229 02:26:41.908792   13956 ssh_runner.go:195] Run: systemctl --version
	I0229 02:26:41.928816   13956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:26:41.958542   13956 kubeconfig.go:92] found "multinode-314500" server: "https://172.19.2.165:8443"
	I0229 02:26:41.959191   13956 api_server.go:166] Checking apiserver status ...
	I0229 02:26:41.969043   13956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:26:42.002720   13956 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2018/cgroup
	W0229 02:26:42.021446   13956 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2018/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:26:42.031448   13956 ssh_runner.go:195] Run: ls
	I0229 02:26:42.038366   13956 api_server.go:253] Checking apiserver healthz at https://172.19.2.165:8443/healthz ...
	I0229 02:26:42.047247   13956 api_server.go:279] https://172.19.2.165:8443/healthz returned 200:
	ok
	I0229 02:26:42.047247   13956 status.go:421] multinode-314500 apiserver status = Running (err=<nil>)
	I0229 02:26:42.048259   13956 status.go:257] multinode-314500 status: &{Name:multinode-314500 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0229 02:26:42.048316   13956 status.go:255] checking status of multinode-314500-m02 ...
	I0229 02:26:42.048930   13956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:26:44.016909   13956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:26:44.016909   13956 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:26:44.016909   13956 status.go:330] multinode-314500-m02 host status = "Running" (err=<nil>)
	I0229 02:26:44.016909   13956 host.go:66] Checking if "multinode-314500-m02" exists ...
	I0229 02:26:44.016909   13956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:26:46.036530   13956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:26:46.036530   13956 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:26:46.036791   13956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:26:48.433726   13956 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:26:48.434723   13956 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:26:48.434723   13956 host.go:66] Checking if "multinode-314500-m02" exists ...
	I0229 02:26:48.444466   13956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 02:26:48.444466   13956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:26:50.463989   13956 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 02:26:50.464070   13956 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:26:50.464070   13956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-314500-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 02:26:52.920893   13956 main.go:141] libmachine: [stdout =====>] : 172.19.5.202
	
	I0229 02:26:52.920893   13956 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:26:52.921411   13956 sshutil.go:53] new ssh client: &{IP:172.19.5.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-314500-m02\id_rsa Username:docker}
	I0229 02:26:53.032639   13956 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.5878548s)
	I0229 02:26:53.042182   13956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:26:53.069434   13956 status.go:257] multinode-314500-m02 status: &{Name:multinode-314500-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0229 02:26:53.069475   13956 status.go:255] checking status of multinode-314500-m03 ...
	I0229 02:26:53.070045   13956 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m03 ).state
	I0229 02:26:55.076759   13956 main.go:141] libmachine: [stdout =====>] : Off
	
	I0229 02:26:55.077249   13956 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:26:55.077249   13956 status.go:330] multinode-314500-m03 host status = "Stopped" (err=<nil>)
	I0229 02:26:55.077329   13956 status.go:343] host is not running, skipping remaining checks
	I0229 02:26:55.077329   13956 status.go:257] multinode-314500-m03 status: &{Name:multinode-314500-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (73.41s)
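The capacity probe that appears twice in the log above (`df -h /var | awk 'NR==2{print $5}'`) is what the status path runs over SSH to read the Use% figure for the node's /var volume. A minimal standalone sketch, runnable on any Linux host:

```shell
# Print the Use% column for the filesystem backing /var:
# `df -h /var` emits a header row plus one data row; NR==2 selects the
# data row, and $5 is the fifth whitespace-separated field (e.g. "17%").
df -h /var | awk 'NR==2{print $5}'
```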

TestMultiNode/serial/StartAfterStop (167.9s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-314500 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-314500 node start m03 --alsologtostderr: (2m14.4324361s)
multinode_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-314500 status
E0229 02:29:28.807955    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
multinode_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-314500 status: (33.3128699s)
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (167.90s)

TestMultiNode/serial/DeleteNode (59.7s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-314500 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-314500 node delete m03: (36.9094534s)
multinode_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-314500 status --alsologtostderr
multinode_test.go:428: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-314500 status --alsologtostderr: (22.4773789s)
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (59.70s)
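The go-template passed to `kubectl get nodes` above walks `.items`, finds each node's condition with `type == "Ready"`, and prints its `.status`. The same extraction can be sketched without a cluster, using grep/sed over a hypothetical two-node payload shaped like `kubectl get nodes -o json` output (node names borrowed from this run):

```shell
# Hypothetical NodeList as kubectl might emit it after deleting m03:
# two nodes remain, both Ready. The grep/sed pair pulls out the status
# value of every Ready condition, mirroring what the go-template prints.
cat <<'EOF' > /tmp/nodes.json
{"items":[
 {"metadata":{"name":"multinode-314500"},
  "status":{"conditions":[{"type":"Ready","status":"True"}]}},
 {"metadata":{"name":"multinode-314500-m02"},
  "status":{"conditions":[{"type":"Ready","status":"True"}]}}
]}
EOF
grep -o '"type":"Ready","status":"[^"]*"' /tmp/nodes.json |
  sed 's/.*"status":"//; s/"$//'
# prints two lines: True, True
```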

TestMultiNode/serial/StopMultiNode (71.1s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-314500 stop
E0229 02:39:28.845835    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
multinode_test.go:342: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-314500 stop: (1m2.5935588s)
multinode_test.go:348: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-314500 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-314500 status: exit status 7 (4.265279s)

-- stdout --
	multinode-314500
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-314500-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0229 02:40:14.266555    8472 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:355: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-314500 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-314500 status --alsologtostderr: exit status 7 (4.2377392s)

-- stdout --
	multinode-314500
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-314500-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0229 02:40:18.530783    5052 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 02:40:18.594101    5052 out.go:291] Setting OutFile to fd 580 ...
	I0229 02:40:18.594101    5052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:40:18.595099    5052 out.go:304] Setting ErrFile to fd 1432...
	I0229 02:40:18.595099    5052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:40:18.607316    5052 out.go:298] Setting JSON to false
	I0229 02:40:18.607316    5052 mustload.go:65] Loading cluster: multinode-314500
	I0229 02:40:18.607602    5052 notify.go:220] Checking for updates...
	I0229 02:40:18.608171    5052 config.go:182] Loaded profile config "multinode-314500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 02:40:18.608317    5052 status.go:255] checking status of multinode-314500 ...
	I0229 02:40:18.608463    5052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500 ).state
	I0229 02:40:20.621955    5052 main.go:141] libmachine: [stdout =====>] : Off
	
	I0229 02:40:20.622065    5052 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:20.622065    5052 status.go:330] multinode-314500 host status = "Stopped" (err=<nil>)
	I0229 02:40:20.622065    5052 status.go:343] host is not running, skipping remaining checks
	I0229 02:40:20.622065    5052 status.go:257] multinode-314500 status: &{Name:multinode-314500 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0229 02:40:20.622065    5052 status.go:255] checking status of multinode-314500-m02 ...
	I0229 02:40:20.622972    5052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-314500-m02 ).state
	I0229 02:40:22.625369    5052 main.go:141] libmachine: [stdout =====>] : Off
	
	I0229 02:40:22.626421    5052 main.go:141] libmachine: [stderr =====>] : 
	I0229 02:40:22.626421    5052 status.go:330] multinode-314500-m02 host status = "Stopped" (err=<nil>)
	I0229 02:40:22.626421    5052 status.go:343] host is not running, skipping remaining checks
	I0229 02:40:22.626421    5052 status.go:257] multinode-314500-m02 status: &{Name:multinode-314500-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (71.10s)
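The long directory name in the recurring "Unable to resolve the current Docker CLI context" warning is not random: the Docker CLI stores context metadata under a digest of the context name, and the hash in the path appears to be the SHA-256 of the string `default` (an observation about the CLI's context store, not something the log states). A quick check:

```shell
# Hash the context name the way the Docker CLI's context store names its
# metadata directories; compare with the path in the warnings above.
printf '%s' default | sha256sum | cut -d' ' -f1
```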

TestScheduledStopWindows (311.72s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-553400 --memory=2048 --driver=hyperv
E0229 02:50:52.165668    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-553400 --memory=2048 --driver=hyperv: (3m1.6628251s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-553400 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-553400 --schedule 5m: (10.1459175s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-553400 -n scheduled-stop-553400
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-553400 -n scheduled-stop-553400: exit status 1 (10.0241535s)

** stderr ** 
	W0229 02:52:53.200307    9240 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-553400 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-553400 -- sudo systemctl show minikube-scheduled-stop --no-page: (8.9285845s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-553400 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-553400 --schedule 5s: (10.0376325s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-553400
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-553400: exit status 7 (2.2294456s)

-- stdout --
	scheduled-stop-553400
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W0229 02:54:22.219143    5948 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-553400 -n scheduled-stop-553400
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-553400 -n scheduled-stop-553400: exit status 7 (2.2162065s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0229 02:54:24.440375   13908 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-553400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-553400
E0229 02:54:28.895603    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-553400: (26.4566737s)
--- PASS: TestScheduledStopWindows (311.72s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.27s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-419900 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-419900 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (267.9552ms)

-- stdout --
	* [NoKubernetes-419900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0229 02:54:53.139911    6668 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.27s)

TestStoppedBinaryUpgrade/Setup (0.63s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.63s)

TestStoppedBinaryUpgrade/Upgrade (736.19s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.2567547734.exe start -p stopped-upgrade-651500 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.2567547734.exe start -p stopped-upgrade-651500 --memory=2200 --vm-driver=hyperv: (7m43.6553775s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.2567547734.exe -p stopped-upgrade-651500 stop
E0229 03:09:28.938920    3312 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-611800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.2567547734.exe -p stopped-upgrade-651500 stop: (35.1808824s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-651500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-651500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (3m57.3535126s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (736.19s)

TestStoppedBinaryUpgrade/MinikubeLogs (9.22s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-651500
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-651500: (9.2238998s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (9.22s)

TestPause/serial/Start (450.13s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-783900 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-783900 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (7m30.1268226s)
--- PASS: TestPause/serial/Start (450.13s)


Test skip (31/207)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (8.33s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-583600 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-583600 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 1700: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (8.33s)

TestFunctional/parallel/DryRun (5.04s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-583600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-583600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0390194s)

-- stdout --
	* [functional-583600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0229 01:23:33.326783    7036 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 01:23:33.387888    7036 out.go:291] Setting OutFile to fd 808 ...
	I0229 01:23:33.387888    7036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:23:33.387888    7036 out.go:304] Setting ErrFile to fd 712...
	I0229 01:23:33.387888    7036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:23:33.409891    7036 out.go:298] Setting JSON to false
	I0229 01:23:33.413893    7036 start.go:129] hostinfo: {"hostname":"minikube5","uptime":266040,"bootTime":1708903773,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 01:23:33.413893    7036 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 01:23:33.415883    7036 out.go:177] * [functional-583600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 01:23:33.415883    7036 notify.go:220] Checking for updates...
	I0229 01:23:33.416910    7036 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 01:23:33.417893    7036 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:23:33.417893    7036 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 01:23:33.418897    7036 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:23:33.419888    7036 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:23:33.420893    7036 config.go:182] Loaded profile config "functional-583600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 01:23:33.421884    7036 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.04s)

TestFunctional/parallel/InternationalLanguage (5.03s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-583600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-583600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0311034s)

-- stdout --
	* [functional-583600] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0229 01:23:34.395818   12132 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 01:23:34.469815   12132 out.go:291] Setting OutFile to fd 1012 ...
	I0229 01:23:34.470816   12132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:23:34.470816   12132 out.go:304] Setting ErrFile to fd 748...
	I0229 01:23:34.470816   12132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:23:34.489825   12132 out.go:298] Setting JSON to false
	I0229 01:23:34.492831   12132 start.go:129] hostinfo: {"hostname":"minikube5","uptime":266041,"bootTime":1708903773,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 01:23:34.492831   12132 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 01:23:34.493831   12132 out.go:177] * [functional-583600] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 01:23:34.494817   12132 notify.go:220] Checking for updates...
	I0229 01:23:34.494817   12132 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 01:23:34.495832   12132 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:23:34.496832   12132 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 01:23:34.496832   12132 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:23:34.497824   12132 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:23:34.498830   12132 config.go:182] Loaded profile config "functional-583600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 01:23:34.499831   12132 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.03s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)